Objectivism Online Forum

.999999999999 repeating = 1



WI_Rifleman


If your axioms are based on a fictional model, you don't have the guarantee of consistency among them that using reality as a model offers.
The word 'model' has a technical meaning pertaining to formal languages. Real or fictional, models are neither consistent nor inconsistent, as consistency and inconsistency are not even applicable to models. And as I mentioned, roughly speaking, the first incompleteness theorem tells us that it is not possible to state an effectively decidable set of axioms of a theory for a sufficiently rich mathematics by making the theory that of an already given model.

You can define the natural numbers from zero to ten and then define the concatenation of a natural number and a digit (0-9) as ten times the number plus the digit.
Your proposal requires having already defined the natural numbers zero through nine, addition, and multiplication. You have not even defined zero, one, or anything else so far. You've only given a statement about cardinality, which you may adopt as an axiom, but it is not a definition. It is not a definition because it does not provide for the eliminability of the symbols. You have two undefined symbols in each statement: the cardinality symbol and the numeral. And though your statement about cardinality does stipulate a condition for a set to have a certain cardinality, you have not given a condition for x = 1. Mind you, not whether the cardinality of a set is the number one, but rather what the number one ITSELF is.
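For concreteness, the quoted concatenation rule is easy to render computationally. The sketch below (a Python illustration of my own, not anything the proposal actually supplies) makes the dependence explicit: it leans on an already-given number system with addition and multiplication, which is precisely the objection above.

```python
# Sketch of the quoted proposal: concatenating a digit onto a natural
# number is "ten times the number plus the digit". Note that this
# presupposes the naturals, +, and * are already defined -- which is
# exactly the objection raised above.

def append_digit(n: int, d: int) -> int:
    """Concatenate digit d (0-9) onto the decimal numeral for n."""
    if not 0 <= d <= 9:
        raise ValueError("d must be a single decimal digit")
    return 10 * n + d

# Building 472 digit by digit:
n = 0
for d in (4, 7, 2):
    n = append_digit(n, d)
# n == 472
```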

I don't define sets. I define concepts, such as the concept "natural number."
You've been mentioning sets frequently. You used set notation to stipulate certain cardinalities. Anyway, whatever you call the values of the variables, whether sets or concepts, you haven't given a definition of 'natural number' that is relevant to a formal system. For that matter, you haven't said anything about the particulars of any formal system that you might propose. A formal system requires a set of symbols, rules for making formulas, transformation rules such as rules of inference, primitives and axioms. And there must be algorithms to determine whether the formation and transformation rules have been applied and whether a given expression is an axiom.

And I needn't enumerate all units of a concept in order to have a definition. Nor do I need an enumeration to specify a set; once I have defined "natural number," I can use set notation (say, something like: { natural number | all }) to refer to the set of all natural numbers.
You need to define 'is a natural number' or take it as a primitive predicate symbol. Then you need to prove an existence statement that there is an entity that is 'the natural numbers' (the set, concept, or whatever you call it) or, you can have this entity in your theory by taking its existence as an axiom.

Read my posts more carefully.
Oh, please, I am reading quite carefully.

2 of E and the 2 of F refer to two different objects.
I understand that that is your view. But that doesn't diminish the point I made that if you hold that numerals should not be used to refer to certain referents, then, unless one is a realist, the mathematics is not changed a bit by taking certain marks in ink or in pixels not to be the usual numerals.

in E to refer to the size of any set {a, b} where a != b.
Now you have gotten closer to defining two when you say it is "THE size" (caps added). But again, without derivation of the existence of all of these "sizes", you have to take as axiomatic that there are all of these infinitely many different sizes. This may be possible to do, but you haven't shown how it can be done so that there is an effective method to decide whether a formula is or is not one of the axioms.

So my statement presupposes that mathematics has been formalized my way
You may presuppose this, but I do not. Sometime later show me the formalization that you now presuppose to exist, and all of your claims will be much more meaningful in that context.

The difference is that what I'm talking about here is a formalization^2, i.e. a formalization of a formal system within another formal system, not the formal system itself.

Possibly you have in mind that yours is a formal meta-theory for a formal Z or ZF (with or without C and R). That's fine, but a formal meta-theory is still a formal theory, yet you've not shown any of the elements of a formal theory: set of symbols, formation rules, transformation rules, axioms, primitives, and adherence to certain definitional schemata.

Cardinals are just what natural numbers are: cardinalities of sets.
Usually, the development of the cardinals requires first the development of the ordinals. I'm curious how you'll achieve the converse.
Edited by LauricAcid


It is not a definition because it does not provide for the eliminability of the symbols.

Aha! This is where our misunderstandings are rooted. When I say "definition," I do not mean the kind of definitions that the current theory, which I object to, has.

The difference is analogous to the previous one with the minimal number of axioms vs. the unlimited number of observations. Just as the minimization of the number of axioms is neither a goal nor a consideration of any other kind with me, neither is the minimization of the number of primitive symbols. It is nice and impressive that mathematicians are able to build a system that is isomorphic with objective thought in its relevant parts from such a minimalistic basis, but I am more concerned with objective thought itself.

My future theory will allow any number of primitive (i.e. non-eliminable) symbols, just like it allows any number of observations as primitive ("axiomatic") premises. What most definitions will do is introduce such primitive symbols and state the essentials of the relationships of their referents with the referents of other symbols.


Your remarks lead me to realize that I should not have demurred from making clear that definitions in formal theories are syntactical and that the definitional method in formal theories is not a method of determining concepts. Definitions in formal theories do allow for the expression of concepts, but this aspect is at an informal level and is not part of the definitions themselves.

Your proposal to state essentials about the relations among a number of primitives is achieved in formal systems through axioms. Axioms fix certain relations among primitives so that only certain models will be models of the theory. In other words, as axioms narrow down which models are models of the theory, the theory becomes more "defined". In the best case, a theory has only models, in each cardinality, that are all isomorphic with one another. In other words, the theory is able to "lock down" to being a "description" of just a certain "state-of-affairs", for a given cardinality of the domain of discourse, to within isomorphism. (It is not possible for a theory to "lock down" to just one model in the manner of excluding models that are isomorphic. And, depending on the cardinality of the language, the Lowenheim-Skolem theorem states that it is not possible to "lock down" to only models of a certain cardinality.)

You add that your statements will mention the referents of the primitives. But this is a separate action from that of axioms. Stating referents is part of an interpretation for a theory but is not part of the theory itself. Moreover, and very roughly speaking, a theory cannot state its own interpretation without being inconsistent. The reason is intuitive: statements of reference lead to evaluations of truth and falsehood, and if a theory (just like a sentence) could refer to its own conditions of falsehood, then this self-reference allows contradictions such as the liar paradox. The formalization of this as a meta-theorem of logic is sometimes called 'Tarski's undefinability of truth theorem'. I'm pretty sure that if your theory includes specification of the referents of the primitives of the theory, then your theory will be inconsistent, as predicted by Tarski's theorem.

Tarski's theorem, Church's theorem (that first order logic is undecidable), the unsolvability of the halting problem for Turing machines (that there is no program to determine whether, in general, a program is free of endless loops) and the first incompleteness theorem are so closely related with one another that one can approach any one of them through any other of them. I mentioned that the incompleteness theorem states that you cannot have a theory that has a decidable set of axioms that is sufficiently rich and is that of a given model, so I strongly suspect that your proposal for a formal theory based on a given model cannot work. Likewise, I strongly suspect that your proposal for a formal theory that states the referents of its own primitives cannot work. These are "two sides of the same coin".

Moreover, while it's fine that your formal theory have a lot of axioms, even an infinite number of them, it must be decidable (there must be an algorithm to determine) whether a formula is an axiom. I predict that this is where you'll run into your first problems, since I don't sense that you have a clear idea of how to ensure that it is decidable what your axioms are, as well as knowing even the scope of all the axioms you'll need for your particular approach.

(The second incompleteness theorem also comes into play here, but upon reflection I think the explication is more direct in this context with the first incompleteness theorem.)

Edited by LauricAcid

I mentioned that the incompleteness theorem states that you cannot have a theory that has a decidable set of axioms that is sufficiently rich and is that of a given model, so I strongly suspect that your proposal for a formal theory based on a given model cannot work.

You assume here that all possible propositions of the theory must be decidable for the theory to "work." Reconsider that assumption.

Moreover, while it's fine that your formal theory have a lot of axioms, even an infinite number of them

I didn't say "infinite." I said "unlimited." Meaning that for any natural number n, it is OK for the theory to have n "axioms."

it must be decidable (there must be an algorithm to determine) whether a formula is an axiom.

Why?


You assume here that all possible propositions of the theory must be decidable for the theory to "work."
I make no such assumption. What has to be decidable is, given an argument, whether it is a proof. Thus, if axioms are used for arguments, then, given a purported axiom, it must be decidable whether the purported axiom is indeed an axiom.

I didn't say "infinite." I said "unlimited." Meaning that for any natural number n, it is OK for the theory to have n "axioms."
Fine, but I'm granting even more, which is that one can have a decidable set of axioms that is infinite.

I wrote, "it must be decidable (there must be an algorithm to determine) whether a formula is an axiom."

Your reply, "Why?"

Earlier in this thread I mentioned (paraphrasing here, but the term 'effective procedure' was used) that in this context, if you propose a formal theory, then there will have to be an effective method to determine whether a formula is an axiom of your theory. You answered that your theory would do this, and I've been mentioning this consideration in post after post. So, while I understand why someone might want to know more about the requirement of an effective method, I hope that you are not now claiming that your theory does not have to meet this requirement, lest huge swaths of our previous conversations be rendered pointless.

Anyway, the requirement of effective method is at the heart of the notion of formal theories, since it is effective method that ensures the very "checkability" that formalization is meant to provide. If one asks, "Why should we require this checkability?", then there are at least two possible responses, which are not necessarily inconsistent with one another: (1) One response would be to explain the need to avoid confusions, ambiguity, and disputes about mathematical proofs, as well as explaining other epistemological considerations. (2) Another response would be to say that the requirement of effective method is "axiomatic" in this context. Along these lines, I am not usually inclined to try to convince people of the need for effective method; usually I would be content to say: I have no interest in preventing you (anyone) from proposing whatever theories you wish to propose in whatever degree of formality or informality you wish to propose them, but without recognition of the requirement of effective method, there is insufficient basis, for me at least (and, I suspect, for most people who study the field of formal theories), to evaluate, critique, or discuss your theories as formal ones. I don't even argue that one cannot provide an account of formal theories that obviates the requirement of effective method, but only that such accounts would not be in my (and, I suspect, most such people's) present scope of interest, even if there were such accounts. Put more colloquially, if you don't want to be confined to a requirement of drawing within the lines, then fine, but then it makes no sense to submit your work in a context of evaluation that holds this as a central requirement.

In the spirit of (1), I could say a lot more about the importance of effective method (though it's not a subject I'm particularly interested in composing posts about at this time, especially since I would not want to be in the position of trying to convince you about this very basic principle), but you would benefit a lot more from reading some of the much more articulate discussions that are in the standard literature of the subject.

Edited by LauricAcid

That the axioms need to be stated unambiguously is one non-technical way of stating the principle. The unambiguousness is ensured by there being an algorithm ('algorithm' is itself given an informal definition) to determine whether something is an axiom. The Church-Turing thesis is a proposal to formalize the notion of algorithm, based on the extensive and, as far as I know, so far uncontested observation that any algorithm can be formulated in certain specified ways (recursive functions, Turing machines, etc.), all of which are proven to be equivalent. And the converse of the Church-Turing thesis is that any of these forms (recursive functions, Turing machines, et al.) provide algorithms. So the requirement on a formal language is that there be an algorithm to determine whether a purported proof is indeed a proof, which entails that there must be an algorithm to determine whether any given formula is an axiom.
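To illustrate what a decidable set of axioms demands, here is a toy Python sketch (my own illustration; the schema and its syntax are invented for the example, not taken from any system discussed here): the set of all instances of the reflexivity schema t = t, for terms built from '0' and a successor symbol 'S', is infinite, yet membership is mechanically checkable, which is all "effectively decidable" requires.

```python
import re

# Toy illustration of a decidable axiom set: the (infinite) set of all
# instances of the reflexivity schema  t = t,  where t is a term built
# from '0' and the successor symbol 'S'. The set is infinite, yet there
# is a purely mechanical test for membership.

TERM = re.compile(r'^(S\()*0\)*$')

def is_term(s: str) -> bool:
    # A term is 0 wrapped in n applications of S, e.g. S(S(0)); the
    # regex fixes the shape, and the count check balances parentheses.
    return bool(TERM.match(s)) and s.count('S(') == s.count(')')

def is_axiom(formula: str) -> bool:
    parts = formula.split('=')
    if len(parts) != 2:
        return False
    lhs, rhs = (p.strip() for p in parts)
    return lhs == rhs and is_term(lhs)

# is_axiom("S(S(0)) = S(S(0))")  -> True
# is_axiom("S(0) = 0")           -> False
```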

The connection with incompleteness is that the incompleteness theorem applies only to theories that meet the abovementioned requirement. And for a given model, there may not be an algorithm to determine which sentences are true in that model. So, for certain models, if the axioms of the theory are to be algorithmically checkable, then one can't just declare that the axioms are the set of sentences true in that model. In particular, even paring down just to the natural numbers and their basic operations of addition and multiplication, the incompleteness theorem says that the standard model (the usual arithmetic with which we are all familiar, and which is formulated in set theory) is one of those models for which there is no algorithm to determine for a given sentence whether it is true in that model. So, if one were to state an axiomatization by saying, "all true sentences of reality are axioms" (accepting, for sake of discussion, whatever your philosophical understanding of reality is), then, a fortiori, there could be no algorithm to determine which are axioms of this theory since the set of sentences true in the standard model of plain arithmetic is a subset of the set of sentences true in reality.

As to Tarski's undefinability of truth theorem, I explained in an earlier post that this is so closely related to incompleteness that (as I recall) one can approach Tarski's theorem through incompleteness (and I know how the converse goes too). The relevance here is that, very roughly speaking, if your formal system is strong enough for even just plain arithmetic, then your system cannot express the meanings of its own terms or your system will be inconsistent. Roughly speaking, the reason is that any sufficiently rich system that can express its own semantics cannot block the liar paradox. Meanwhile, the proof of the incompleteness theorem shows how one can use a variation of the liar paradox, but as to provability rather than truth; that is, instead of 'This sentence is false', we can "arithmetically code" 'This sentence is unprovable from the axioms', so that if the (algorithmically checkable) axioms for arithmetic referred to in that sentence are consistent, then the "coded" sentence is true; thus there is a true sentence of arithmetic that is unprovable from our axioms. So, Tarski shows that sufficiently rich theories cannot consistently express their own semantics, while Godel shows that sufficiently rich and consistent theories can express their own syntax.
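A toy illustration of the "arithmetical coding" idea, in Python (a crude prime-exponent coding of my own devising, far simpler than Godel's actual construction): strings of symbols become numbers, recoverably, which is what makes it plausible that a theory about numbers can talk about its own syntax.

```python
# Toy "Godel numbering": code a string of symbols as a single natural
# number via prime exponents, and decode it back. This only gestures at
# the idea; Godel's actual coding is far more elaborate.

SYMBOLS = ['0', 'S', '(', ')', '=', '+', '*']

def primes(n):
    """First n primes, by trial division (fine for toy sizes)."""
    found = []
    k = 2
    while len(found) < n:
        if all(k % p for p in found):
            found.append(k)
        k += 1
    return found

def encode(formula: str) -> int:
    g = 1
    for p, ch in zip(primes(len(formula)), formula):
        g *= p ** (SYMBOLS.index(ch) + 1)  # exponent codes the symbol
    return g

def decode(g: int) -> str:
    out = []
    for p in primes(64):  # more primes than any toy formula needs
        e = 0
        while g % p == 0:
            g //= p
            e += 1
        if e == 0:
            break
        out.append(SYMBOLS[e - 1])
    return ''.join(out)

# decode(encode("S(0)=S(0)")) == "S(0)=S(0)"
```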

Edited by LauricAcid

So, for certain models, if the axioms of the theory are to be algorithmically checkable, then one can't just declare that the axioms are the set of sentences true in that model.

Nor do I wish to do so. I propose to have a fixed and manageable set of "axioms," only I don't insist on trying to minimize their number. So my mathematics might have, say, a couple hundred "axioms."

cannot block the liar paradox.

The liar paradox arises only for purely self-referential sentences, such as "This sentence is false." Purely self-referential sentences are as meaningless as division by zero, and should be treated by any decent theory as such.


I don't know why you put 'axiom' in quote marks, but there's no technical objection to having hundreds, thousands, millions, or even an infinite number of axioms. The only requirement is that it should be purely mechanical to check, for any given formula, whether it is an axiom. I.e., if you show a string of symbols, then there should be an algorithm to decide whether that string of symbols is an axiom.

As to the liar paradox, first order theories themselves don't determine whether expressions of the language are meaningful. Meaning is given by an interpretation of the language through a structure (which is a function from the symbols of the theory). In this way, all sentences of the language are meaningful upon an interpretation. However, a sufficiently rich theory whose well formed formulas can express their own truth conditions will be, in a manner similar to the liar paradox, an inconsistent theory. An inconsistent theory is defined as one that includes a formula and the negation of that formula (thus includes all formulas of the language, since from a contradiction anything can be derived). It turns out, as we would hope, that an inconsistent theory is one for which (though its language has an interpretation) there is no structure in which all the sentences of the theory are true. It's a meta-theorem that an inconsistent theory has no model and, conversely, a theory with no model is an inconsistent theory. Going back to your project, the point here is that even if your theory is a meta-theory, it can express the truth conditions of object theories, but it can't express its own truth conditions without being inconsistent. If you want to express the truth conditions of your meta-theory, then you need to do that in a meta-meta-theory.
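At the propositional level, "from a contradiction anything can be derived" can itself be checked mechanically; the Python sketch below (my own illustration) verifies by truth table that (p & ~p) -> q holds under every valuation.

```python
from itertools import product

# "From a contradiction anything can be derived" is, at the
# propositional level, the tautology (p & ~p) -> q: no valuation
# makes the antecedent true, so the conditional is never falsified.

def implies(a: bool, b: bool) -> bool:
    # material conditional
    return (not a) or b

tautology = all(
    implies(p and not p, q)
    for p, q in product([True, False], repeat=2)
)
# tautology == True
```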

But not all self-reference leads to inconsistency. For example, a certain sentence, which as Godelized reads 'This sentence is unprovable', is not only meaningful; it is true in the standard model of the axioms of arithmetic.

The problem of division by zero stems from the fact that the division symbol has a conditional definition. If the conditions of the definition are not met, then we are not guided how to eliminate the symbol. There are different remedies for this, including recasting the definition so that it is not conditional, which is usually achieved by setting the value of division by zero to some designated constant, such as zero itself.
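A sketch of that remedy in Python (my own illustration; some proof assistants adopt the same convention): division is made total by designating zero as the value of division by zero. This is a convention about the symbol, not a claim that 1/0 "really" is 0.

```python
from fractions import Fraction

# Making division total by designating a value for the b == 0 case,
# so the symbol is always eliminable.

def total_div(a: Fraction, b: Fraction) -> Fraction:
    return a / b if b != 0 else Fraction(0)

# total_div(Fraction(1), Fraction(3)) == Fraction(1, 3)
# total_div(Fraction(1), Fraction(0)) == Fraction(0)
```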

Edited by LauricAcid

... if your formal system is strong enough for even just plain arithmetic, then your system cannot express the meanings of its own terms or your system will be inconsistent. Roughly speaking, the reason is that any sufficiently rich system that can express its own semantics cannot block the liar paradox.

The liar paradox arises only for purely self-referential sentences, such as "This sentence is false." Purely self-referential sentences are as meaningless as division by zero, and should be treated by any decent theory as such.

One does not actually say "This sentence is false." Instead one says something similar to:

The sentence which results from replacing each occurrence of the 24th letter in the alphabet by "The sentence which results from replacing each occurrence of the 24th letter in the alphabet by "x" in "x" is false." in "The sentence which results from replacing each occurrence of the 24th letter in the alphabet by "x" in "x" is false." is false.

Now explain to me exactly why this sentence is meaningless.
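The construction of that sentence is mechanical, as this Python sketch (my own illustration) shows: substituting the quoted template into itself yields a sentence that describes exactly the substitution that produced it, with no phrase like "this sentence" anywhere in it.

```python
# Build the indirect liar sentence by substitution: take a template
# containing the placeholder letter x, then substitute the template
# itself for x. (Python's str.replace does not re-scan inserted text,
# so the substitution terminates.)

TEMPLATE = ('The sentence which results from replacing each occurrence '
            'of the 24th letter in the alphabet by "x" in "x" is false.')

liar = TEMPLATE.replace('x', TEMPLATE)

# The sentence asserts that a certain substitution result is false,
# and performing the substitution it describes reproduces the sentence
# itself -- self-reference without the words "this sentence":
assert liar == TEMPLATE.replace('x', TEMPLATE)
assert 'this sentence' not in liar.lower()
```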


I don't know why you put 'axiom' in quote marks
The preferred term with my proposed theory is "observation." But the thing I mean by it is what you call an axiom, and my replies are clearer if I use the same term as you do.

but there's no technical objection to having hundreds, thousands, millions, or even an infinite number of axioms.
Okay, then we have no problem. The incompleteness and related theorems will apply to my theory in exactly the way they do to the current theory. As far as the technicalities of formalization are concerned, the only difference will be that while the current theory has few primitive symbols and axioms and many definitions, my theory will have many of the former and few of the latter. That, and that what you call axioms will be called observations, and the word "definition" will refer to a subset of the observations in addition to the few symbol expansions I'll have.

Those are the technical differences. As you can see, they are relatively minor. The major difference is to be found in the motivation for preferring primitive symbols and observations ("axioms") to symbol expansion definitions--but a difference in motivation does not have any bearing on the workings of the formalism.

The sentence which results from replacing each occurrence of the 24th letter in the alphabet by "The sentence which results from replacing each occurrence of the 24th letter in the alphabet by "x" in "x" is false." in "The sentence which results from replacing each occurrence of the 24th letter in the alphabet by "x" in "x" is false." is false.

Now explain to me exactly why this sentence is meaningless.

For a sentence to be meaningful, it must name a fact of reality. The above sentence asserts that something is false. For simplicity, let's refer to that something as L ; thus, the sentence says, "L is false." For this to be meaningful, L must be meaningful. But, expanding L, we find that L is the sentence "L is false," which is the very sentence whose meaning we are now trying to establish. We have reached an unresolvable circularity while seeking the meaning of the sentence--thus, the meaning of the sentence cannot be established; it is meaningless.


  • 2 weeks later...

What I know is:

1/3 does not equal 0.3333~. Logically, if we made that infinitely long it would never equal 1/3, just like with the limit of a function:

lim x->a f(x) = L means that f(x) will be arbitrarily close to L provided x is sufficiently close to a, but not equal to a.

1/3 != 0.3333~

3 x 1/3 = 1

3 x 0.3333~ = 0.9999~


if we made that infinitely long it would never equal 1/3

You cannot make it "infinitely long." What you can validly say is, "No matter how long we make it, it will never equal 1/3," which is true. But it is also true that it gets closer and closer to 1/3 as you keep appending the 3's, and that is exactly the idea with limits.

".3~ = 1/3" means "The series you obtain by starting with .3 and continuously appending 3's converges to 1/3." That is a true statement.

Edited by Capitalism Forever

Ugh, the debate goes on.

If people have trouble dealing with repeating decimals, what is the state of NON-repeating decimals?

We all know that the circumference of a circle of radius r is 2 * Pi * r

But Pi = 3.1415926535...

To maintain logical consistency the people who believe that infinite decimals DO NOT specify real numbers but rather some limiting process would have to believe that the circumference of a circle gets larger and larger the more decimal digits of Pi we used because there is no number exactly corresponding to Pi since it just exists as an ephemeral limit of some process. Of course this patently contradicts a circle as a single finite object.


To maintain logical consistency the people who believe that infinite decimals DO NOT specify real numbers but rather some limiting process would have to believe that the circumference of a circle gets larger and larger the more decimal digits of Pi we used because there is no number exactly corresponding to Pi since it just exists as an ephemeral limit of some process.

This confuses the act of measuring the length of an object (or the magnitude of the number) with the length itself. The length of the circumference of a circle does not change simply because you've specified more digits of its length-- and the value of pi does not change merely because a supercomputer somewhere found the next digit. Under your logic, you'd think that I believe that a stick's length changes when I measure it with an (inch-ruled) ruler as opposed to a metric meterstick-- this is not being "logically consistent."
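The point can be checked numerically: successive decimal truncations of pi approximate one fixed circumference ever more closely; they do not make it "larger and larger" without bound. A Python sketch (my own illustration):

```python
import math

# For a circle of radius 1, every decimal truncation of pi gives an
# under-approximation of the fixed circumference 2*pi, and the error
# shrinks by roughly a factor of 10 per digit. Nothing grows unboundedly.

def pi_truncated(digits: int) -> float:
    scale = 10 ** digits
    return math.floor(math.pi * scale) / scale

errors = [2 * math.pi - 2 * pi_truncated(d) for d in (1, 3, 5, 7)]
# errors are positive and strictly decreasing toward 0
```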


the people who believe that infinite decimals DO NOT specify real numbers but rather some limiting process would have to believe that the circumference of a circle gets larger and larger the more decimal digits of Pi we used because there is no number exactly corresponding to Pi since it just exists as an ephemeral limit of some process.
Who maintains that decimal expansions are not notations for real numbers? Yes, a decimal expansion is notation for a limit of an infinite sequence, but the limit is a real number.

On the other hand, if it is the use of the word 'process' that you're attacking, then I agree that 'process' is undefined as well as gratuitous. There is an infinite sequence, which is not a process, but rather a function, which is a "process" only in an informal, metaphorical sense. That is, unless one has a theory in which 'process' conveys a primitive or defined predicate (symbol).

But what is an "ephemeral limit"? If a sequence converges to a real number, then the limit of the sequence is the real number to which the sequence converges. The proofs and definitions upon which this is built are precise. There is nothing ephemeral about this unless you consider real numbers themselves to be ephemeral.

Edited by LauricAcid

".3~ = 1/3" means "The series you obtain by starting with .3 and continuously appending 3's converges to 1/3.".

This is true, but it may not be clear to some people.

0.3 * 3 = 0.9 < 1 < 1.2 = 0.4 * 3 thus 0.3 < 1/3 < 0.4

0.33 * 3 = 0.99 < 1 < 1.02 = 0.34 * 3 thus 0.33 < 1/3 < 0.34

0.333 * 3 = 0.999 < 1 < 1.002 = 0.334 * 3 thus 0.333 < 1/3 < 0.334

Et cetera.

This is what is meant by saying that 0.333... converges to 1/3.
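The sandwich above can be verified exactly with rational arithmetic; a Python sketch (my own illustration):

```python
from fractions import Fraction

# The sandwich argument, checked exactly: for each n, the n-digit
# truncations pin 1/3 into an interval of width 1/10**n.

third = Fraction(1, 3)
for n in (1, 2, 3, 10):
    lo = Fraction(int('3' * n), 10 ** n)   # 0.33...3 (n threes)
    hi = lo + Fraction(1, 10 ** n)         # 0.33...4
    assert lo * 3 < 1 < hi * 3             # e.g. 0.99 < 1 < 1.02
    assert lo < third < hi
```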


  • 1 month later...
This is what is meant by saying that 0.333... converges to 1/3.

Dudes and dudettes. The problem is a failure to differentiate between '=' for real numbers and '=' for integers. Equality for real numbers is different from equality for integers. The proof relies on the fact that people interpret '=' in the sense of integers, and in the middle the author of the proof changes his interpretation to '=' for real numbers. The proof relies on the fact that '=' is an overloaded relation.


Equality for real numbers is different from equality for integers.
As the integers are embedded in the reals, there is only one equality relation on them (though there are other equivalence relations on them).

The proof relies on the fact that people interpret '=' in the sense of integers and in the middle the author of the proof changes his interpretation to '=' for real numbers.
I've never seen this. Specifically, the proof I posted does not change "interpretation". The proof does depend on the natural numbers and rational numbers being isomorphically embedded in the real numbers, but that they are is well established.

As the integers are embedded in the reals, there is only one equality relation on them (though there are other equivalence relations on them).

I've never seen this. Specifically, the proof I posted does not change "interpretation". The proof does depend on the natural numbers and rational numbers being isomorphically embedded in the real numbers, but that they are is well established.

If you look at the construction of the integers from the naturals, then the construction of the rationals from the integers, and so on up to the reals, you will see that it is not the same equality. Since p/q = m/n whenever pn = qm, equality in the rationals involves more than equality for the integers; one is defined in terms of the other. Equality for the reals involves even more concepts than simple multiplication does in the case of the rational numbers. One can define reals as sequences of rationals or as Dedekind cuts; most books define the reals in terms of sequences of rationals, and one says that two real numbers are "equal" whenever the sequences that define them have the same limiting value. So when one says .9999999999... = 1, if one interprets the equals sign as equality for integers, then the assertion is obviously false; but when one spells out the details and says that the real number .9999999... defined by a certain sequence and the number 1 defined by another sequence have the same limiting value, then there is nothing mysterious about the assertion .9999999... = 1. The whole thing relies on people not knowing enough about sequences and series to interpret the equality sign in the right way.
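The cross-multiplication criterion mentioned here is easy to sketch; in the Python illustration below (my own), two pairs name the same rational exactly when the products pn and qm agree, so equality of rationals is an equivalence relation on pairs rather than literal equality of the pairs.

```python
# The criterion quoted in the construction of the rationals: the pairs
# (p, q) and (m, n) name the same rational exactly when p*n == q*m.

def same_rational(pq, mn):
    (p, q), (m, n) = pq, mn
    if q == 0 or n == 0:
        raise ValueError("zero denominator")
    return p * n == q * m

# same_rational((1, 2), (3, 6)) -> True   (1/2 and 3/6 name one rational)
# same_rational((1, 2), (2, 3)) -> False
```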


If you look at the construction of the integers from the naturals, then the construction of the rationals from the integers, and so on up to the reals, you will see that it is not the same equality.
The construction takes place in set theory. In set theory, the equality symbol has an axiom for it, along with the axioms of identity theory. The equality predicate applies in just the same way to any objects, no matter how they've been constructed. However, it is true that the naturals, integers, and rationals are not real numbers, so we cannot evaluate them, per equality (or per ANYTHING), as reals. But we can evaluate them, per equality, as IF they were reals. What you're leaving out is that there is an EMBEDDING of the naturals, integers, and rationals into the reals. So when we talk about, say, 1 as a real number but also as if it were a natural number, this is permissible, since the object that we informally mean by '1' (where we really should have a subscript or something to indicate that we don't mean the natural number 1, but rather the value, under the composed embeddings, of the natural number 1: the naturals embed into the integers, the integers into the rationals, and the rationals into the reals) behaves in the reals, with respect to any relation with other embedded naturals, just as the natural number 1 behaves among the naturals.
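The claim that an embedding preserves all the relevant relations can be checked in miniature. A sketch, assuming the usual embedding e(n) = n/1 of the naturals into the rationals (the name `e` is mine): addition, multiplication, and order all commute with the map, which is why the informal identification is harmless.

```python
from fractions import Fraction

# The embedding e(n) = n/1 sends each natural number to a rational.
# It preserves addition, multiplication, and order, so the natural
# number 1 and its image e(1) behave identically in any such relation.
def e(n):
    return Fraction(n, 1)

for a in range(5):
    for b in range(5):
        assert e(a + b) == e(a) + e(b)   # addition is preserved
        assert e(a * b) == e(a) * e(b)   # multiplication is preserved
        assert (a < b) == (e(a) < e(b))  # order is preserved
```

The same pattern repeats at each stage (integers into rationals, rationals into reals), which is what licenses writing '1' for the real, rational, integer, or natural without ambiguity in practice.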

One can define reals as sequences of rationals or as dedekind cuts, most books define the reals in terms of sequences of rationals
No, usually one defines reals as EQUIVALENCE CLASSES of (Cauchy) sequences of rationals.
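The difference matters for the .999... question: the sequences (0.9, 0.99, 0.999, ...) and (1, 1, 1, ...) are distinct sequences, but they belong to the same equivalence class because their difference tends to 0. A minimal sketch (the names `a` and `b` are mine):

```python
from fractions import Fraction

# a(n) is the nth term of (0.9, 0.99, 0.999, ...); b(n) is the constant 1.
def a(n):
    return 1 - Fraction(1, 10**n)

def b(n):
    return Fraction(1)

# |a(n) - b(n)| = 10^(-n), which falls below any epsilon > 0 eventually,
# so the two sequences are equivalent: they name the same real number.
for n in range(1, 20):
    assert abs(a(n) - b(n)) == Fraction(1, 10**n)
```

Two different sequences, one real number: that is exactly what the equivalence-class construction delivers, and it is the precise content of ".999... = 1".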

So when one says .9999999999... = 1, if one interprets the equals sign as equality for integers, the assertion is obviously false
.999... is not an integer, but the mistake people make is not that of misinterpreting the equality sign for .999... as an integer. You've reversed things. The equality predicate is fixed. What varies is that INFORMALLY mathematicians (taking advantage of the embeddings) talk about reals, integers, rationals, and naturals as if they're all in one number system, but only because mathematicians don't allow this informality to lead to erroneous inferences. And this informality is not the source of non-mathematicians' misunderstanding. The source is mentioned here:

The whole thing relies on people not knowing enough about sequences and series to interpret the equality sign in the right way.
No, the problem with people who don't know the mathematics is not that of misunderstanding equality, but rather, just as you said, of not knowing about sequences and series, and, I add, of not knowing that the notation .999... indicates the limit of an infinite sequence.
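That .999... denotes a limit can be spelled out: it abbreviates the limit of the partial sums of the series 9/10 + 9/100 + 9/1000 + ..., and each partial sum has the closed form 1 - 10^(-n). A sketch (the name `partial_sum` is mine):

```python
from fractions import Fraction

# .999... denotes lim_{n->inf} of sum_{k=1}^{n} 9/10^k.
def partial_sum(n):
    return sum(Fraction(9, 10**k) for k in range(1, n + 1))

# Each partial sum equals 1 - 10^(-n) exactly, so the limit is exactly 1.
for n in range(1, 10):
    assert partial_sum(n) == 1 - Fraction(1, 10**n)
```

Since 10^(-n) can be made smaller than any positive quantity, the limit of the partial sums is 1; no term of the sequence equals 1, but the notation .999... names the limit, not a term.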

But I do agree that, in general, if people understood set theoretic equality, which allows for a "court of last resort" for comparing objects from different number systems, then people would have a much clearer understanding of these things.

By the way, there is a way to construct the integers, rationals, and reals from the naturals so that the naturals, integers, and rationals end up being just what they are (not members of the ranges of embeddings) in the final number system, which is a complete ordered field. See Azriel Levy's 'Basic Set Theory' (poorly titled, since the book offers a trove of non-basic information), which in paperback is a steal to purchase, though I think his formulations and explanations in the first thirty or so pages are unnecessarily complicated (but the exposition soon enough gets into a fine groove, packed with material).

Edited by LauricAcid

Moose,

That's something different. That .999... = 1 isn't a fallacy, it's just counterintuitive, kind of like some of Zeno's paradoxes.

Pat Corvini gave a fascinating lecture regarding Zeno's paradox and the concept of infinity at last summer's conference in San Diego - I imagine it can be found in the Ayn Rand Bookstore on ARI's website. I highly recommend it.

Edited by Elle

By the way, there is a way to construct the integers, rationals, and reals from the naturals so that the naturals, integers, and rationals end up being just what they are (not members of the ranges of embeddings) in the final number system, which is a complete ordered field. See Azriel Levy's 'Basic Set Theory' (poorly titled, since the book offers a trove of non-basic information), which in paperback is a steal to purchase, though I think his formulations and explanations in the first thirty or so pages are unnecessarily complicated (but the exposition soon enough gets into a fine groove, packed with material).

Great. You made my point quite well. But I don't understand what you mean by the above paragraph.


I don't understand what you mean by the above paragraph.
Usually, as you mentioned, we build one number system from another by using equivalence relations. So the natural numbers, for example, end up not as a subset of the reals, but rather the naturals get mapped into the integers get mapped into the rationals get mapped into the reals. So what we get is an image of an image of an image of the naturals. Isomorphisms preserve all the relations, so it's okay, but still we don't really have the naturals or the integers or the rationals as subsets of the reals. Instead we have images of these that are subsets of the reals. Not a problem for math, but we might pine just a bit for having the unreconstituted numbers in the end product.

What Levy does (I don't know whose method it is originally):

Take the natural numbers. Then define a set that is what we usually think of as the negative integers. Then the integers are the union of the naturals with the new set. So the naturals are a subset of the integers.

Then take the integers. Then define a set that is the non-integer "fractions". Then the rational numbers are the union of the integers and the non-integer "fractions". So the integers are a subset of the rationals.

Then take the rationals. And define the irrationals. Then the reals are the union of the rationals and the irrationals. So the rationals are a subset of the reals.

And the defined operations and the ordering relations work out just fine at each step as we lump the different kinds of numbers together into unions.
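The three steps above can be rendered as a toy sketch. This is only an illustration of the union idea, not Levy's actual construction; the tagged-tuple representation of negatives and all names here are my own invention. The point is that the naturals remain literal members of the larger set, rather than being replaced by isomorphic images.

```python
# Toy rendering of the "union" construction (finite stand-ins).
# Plain ints >= 0 play the naturals; negatives are newly defined objects.
NATURALS = set(range(10))
NEGATIVES = {("neg", n) for n in range(1, 10)}  # hypothetical encoding
INTEGERS = NATURALS | NEGATIVES

# The naturals are a genuine subset of the integers, not an image:
assert NATURALS <= INTEGERS
assert 3 in INTEGERS  # the natural number 3 itself is an integer here
```

Contrast this with the usual equivalence-class construction, where the integer corresponding to 3 would be a class of ordered pairs of naturals, and 3 itself would not literally be a member of the integers.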

Well, having the numbers as subsets is the way we were taught as children. But as children we didn't demand axiomatic construction. And, as it turned out, mathematics found an axiomatic construction that is ingenious and lucid in the steps of the CONSTRUCTION but ends up with images as subsets, not the originals as subsets. The method Levy conveys actually realizes the more intuitive RESULT that we were taught as children: that the naturals are a subset of the integers, which are a subset of the rationals, which are a subset of the reals.

Check it out. Page 216 of his set theory book. Oh, and he does the whole construction in four pages (since verifying most of the properties along the way is routine and thus unstated). Quite nice.

Edited by LauricAcid

This topic is now closed to further replies.