Objectivism Online Forum

.999999999999 repeating = 1

WI_Rifleman



Some misconceptions of yours and other difficulties with your views, not necessarily in order of importance:

1. You hold that numbers should be defined by "reference to what they represent in reality" but you're stuck thinking that the set theoretic constructions commit to definitions of numbers as descriptions, which they are not. Rather, the constructions are indeed means of forming representatives.

2. You hold that foundations be "the right ones", but you don't mention any axiomatization that you think is a "right one".

3. You hold that mathematics should obey some non-mathematical definition of the word 'set'. (See my previous post about this, especially since, formally, set theory doesn't even need to mention sets.) And you hold that the "meaning" of the word in formal mathematics should be the same as that outside of mathematics. This reflects that you don't understand even the notion of a formal language.

4. You define "meaning" as tied to mental groupings, but without any suggestion how this relates to formal semantics.

5. Your use of terms you've introduced, including 'class', 'specification', 'groupings', 'capture', 'nature', 'aspects', 'attributes', and 'predicates', is vague and circular, with no suggestion of which are the primitive terms and what are their axioms.

6. Your notion of the "usefulness" of set theory seems to me to be not much more than that of echoing mental appraisals of physical observations.

7. You proposed a formal definition of '2'. So when the challenge under discussion was that of defining '2' to stand for a certain object, you instead stated a formula (which is a theorem of set theory anyway) that asserts the necessary and sufficient condition for a set to have cardinality of two. So you didn't define '2' but instead brought in yet another symbol - one for cardinality - and restated a set theoretic theorem about cardinality and 2.

8. You wrote, "When you define a number to be a set, you are declaring that the meaning of a symbol should be something you just invented." This may be your theory of definition, but it has virtually nothing to do with formal theories, since definitions in formal theories are syntactical not semantical.

9. Your solution to Russell's paradox is to avoid circularity. But what is circularity? Self-reference? But you have no formal system or axiomatization for avoiding self-reference or for avoiding certain pernicious self-reference while allowing other acceptable or even needed forms of self-reference. You wrote:

"A circular (purely self-referential) statement is not meaningful because it does not say anything about reality."

'x = x' is self-referential. As I mentioned above, you need to provide an effective method for disallowing only certain kinds of self-reference, or you need to provide a system that provides an effective method of determining self-referential formulas and disallowing them altogether while still allowing formulas that mention a term more than once but not (as you somehow make effectively decidable) self-referentially.

10. You asked, "Why cannot 2 simply be a primitive symbol (with 2 = 1 + 1 postulated)?"

(1) It can be. (2) But since '2' can be defined, there's no need to take it as an additional primitive. (3) And if we were to take each natural number as having a primitive symbol to represent it, then we'd have an infinite number of additional primitive symbols with an infinite number of additional axioms for each natural number. And though that is okay in principle (but pretty uneconomical, I'd say), it would have to be shown that there is an effective method to determine which formulas are axioms. (4) Instead, we might just take '0' and a successor symbol as primitive and allow all the other natural numbers to be "generated" from these primitives. That's what Peano arithmetic does. But Peano arithmetic doesn't give us the set of natural numbers as an object. And, for analysis, we need that set of natural numbers. And set theory is what gives us that set, while Peano arithmetic is itself constructed from set theory.
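To make (4) concrete, here is a small sketch in Python (my illustration, not any formal system under discussion; the names `ZERO`, `succ`, and `to_int` are just chosen for the sketch) of how every numeral is "generated" from a zero object and a successor operation, with addition defined by Peano-style recursion:

```python
# Natural numbers "generated" from two primitives: a zero object and a
# successor operation. Numerals are nested applications of the successor.

ZERO = ()                      # represent 0 by an arbitrary fixed object

def succ(n):
    return ("S", n)            # S(n): one more application of the successor

def add(m, n):
    # Peano-style recursion: m + 0 = m, and m + S(n) = S(m + n)
    if n == ZERO:
        return m
    return succ(add(m, n[1]))

def to_int(n):
    # decode a numeral back to a Python int, for inspection only
    count = 0
    while n != ZERO:
        count += 1
        n = n[1]
    return count

two = succ(succ(ZERO))         # "2" is S(S(0)): defined, not primitive
print(to_int(add(two, two)))   # 4
```

Note that '2' here is defined, as `succ(succ(ZERO))`, rather than taken as an additional primitive, which is the point of (2) above.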

11. You wrote, "Or if you like, you could even postulate 2 = |{{},{{}}}| instead."

That's a theorem of set theory anyway.

12. You wrote:

"the meaning of "where A, B are sets" is "where A and B refer to mental groupings of objects, arising either as enumerations or as combinations of a base class and a predicate."

I asked how you'd formalize that.

You responded:

"Somehow like, "A element-of S and B element-of S." S and element-of are primitive symbols of the formal system."

(1) The part about 'B' is superfluous. (2) Since 'A element-of S' is just a primitive formula, it's not a definition, and by just pointing to a primitive formula of set theory, you've not done any formalizing that set theory hasn't already accomplished; in particular, you haven't formalized 'mental grouping', 'arising', 'enumerations or combinations', 'base class', and 'predicate'. Or, if you have (but you haven't), then you've given a formalization that is just the most basic set theory itself, in which case, if your insistence that set theory is incorrect were the case, then your own formalization, which is no different, would be incorrect.

13. I asked about the Objectivist concept of 'essential', especially in a mathematical context. You referred me to ITOE. That text does not provide an explanation of how the concept of 'essential' works as a mathematical criterion, and its explanation of 'essential' even in a general sense is not adequate. By the way, supporting the concept of essentiality is a crucial point if Objectivism is really to provide its own meaningful answer to the problem of universals.

14. You explain infinity in terms of "potentials", but in what formal system is 'potential' defined? Or if 'potential' is primitive, then what are the axioms for it?

15. You claim that the set of natural numbers is known to exist since they are "specified" by a "base class" and a "predicate" that is "true in every instance".

What base class? What predicate?

Edited by LauricAcid

LauricAcid, you have one major misconception.

We do not live so we can create formal languages; we create formal languages so we can live. The act of proving a theorem in a formal language does indeed consist of a mere manipulation of symbols, regardless of what meaning they have--they might as well be meaningless--but the purpose for creating a formal language in the first place is to describe observations of reality and to be able to make inferences from them. So one ought to know--OUTSIDE the formal system--what in reality the symbols of the formal language refer to.

I would like to stress, emphasize, draw attention to, accentuate, underscore, and re-emphasize the word OUTSIDE in the above sentence. I will NOT give you a formal definition of "set." But I will tell you, in--ultra-emphasis on!--plain English, what in reality the concept "set" refers to, and what in reality the number two refers to, so that when I prove using a formal system that a certain set has a cardinality of two, you can use that information to benefit your life.


"We do not live so we can create formal languages."

If you think that I deny the above sentence then that's a misconception you have about me.

Also, if you knew more about semantics for formal systems, then you'd know the sense in which 'outside' is indeed dealt with.

"[...] so that when I prove using a formal system [...]"

You have no formal system.

"[...] you can use that information to benefit your life."

(1) While man's most basic motivation for mathematics is surely to deal with the physical world, it would be merely a stipulation that correct mathematics is only that which benefits your life if 'benefit' excludes the intellectual interests of mathematicians that we may not know to have practical application at this time or even some future time. (2) I don't imagine that you have not benefited from all the mathematics that has allowed the technology you use, including the very computer you're using now. This is virtually all mathematics formed with no regard whatsoever for your own personal or Objectivist philosophy of what mathematics should be.

Edited by LauricAcid

"We do not live so we can create formal languages."

If you think that I deny the above sentence then that's a misconception you have about me.

I hope so.

You have no formal system.

I do not (yet?) have a formal system that is explicitly based on reality. But mathematics IS implicitly based on reality, no matter how hard some mathematicians try to pretend it isn't. The word "two" DOES refer to "a count where there is one item and another item but no more"; pretending that 2 = {{}, {{}}} does not change this fact.

I don't imagine that you have not benefited from all the mathematics that has allowed the technology you use, including the very computer you're using now.

I certainly have benefited from it, precisely because of the implicit basis it has in reality.

Consider a formal system where the axioms are:

  • 1. hula bula
  • 2. blabla krakula

and where the rules of inference are duplication and concatenation. This would allow me to "prove," for example, the proposition "hula bula hula bula blabla krakula."

Is this a useful formal system? Can it help anyone build a computer, or any other technological feat? Not as long as you refuse to give a plain-English explanation of just what exactly "hula bula" and "blabla krakula" mean and why they can be validly duplicated and concatenated. Without an objective and real referent for each symbol, a language--be it formal or informal--is just an exercise in "garbage in, garbage out." The input needs to be meaningful for the output to be meaningful.
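For what it's worth, the toy system above really can be manipulated purely mechanically. A sketch in Python (mine, purely illustrative) that derives the example "theorem" by applying the two stated rules to the two axioms:

```python
# The toy formal system described above: two axioms, with duplication and
# concatenation as rules of inference. "Proving" a string is just mechanical
# symbol manipulation, independent of any meaning the symbols might have.

AXIOMS = {"hula bula", "blabla krakula"}

def duplicate(s):
    # rule 1: from s, infer s followed by s
    return s + " " + s

def concatenate(s, t):
    # rule 2: from s and t, infer s followed by t
    return s + " " + t

# Derive "hula bula hula bula blabla krakula" from the axioms:
step1 = duplicate("hula bula")                  # hula bula hula bula
step2 = concatenate(step1, "blabla krakula")    # the target "theorem"

print(step2)  # hula bula hula bula blabla krakula
```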


[...] pretending that 2 = {{}, {{}}} does not change this fact.

It is possible to define 2 in such a way. However, it is of no practical value. Just as in your example with the axioms. You can invent any kind of axioms and any kinds of rules. If you are consistent with your application of them, you can have any kind of system. However, the longevity of a system based on arbitrary axioms is determined by its practical use - if there is none, it won't survive. It will be dismissed as junk.

Too bad it doesn't work the same way with philosophy.


I do not (yet?) have a formal system that is explicitly based on reality.

Evaluating your success in this endeavor requires judging whether you've met these criteria:

1. That your system is indeed a formal system such that there is an effective procedure to decide whether something is a proof.

2. That your system provide at least all of the mathematics that is needed for all of the technology that is already based on or has been developed in collaboration with the mathematics you reject. (And since we cannot perfectly predict the technological applications of all mathematics, we might like for your system to allow for areas of mathematical exploration.)

3. That you've provided some definition of 'explicitly based on reality'.

But mathematics IS implicitly based on reality, no matter how hard some mathematicians try to pretend it isn't.

Again, it would help if you defined this distinction of yours between implicit and explicit basing on reality. After I pointed out that the mathematics you object to is the same mathematics used for the technology you use, your response is that this is possible because the mathematics you object to is implicitly based on reality. But you seem to think that these implicit assumptions have been betrayed in some explicit manifestation. What you don't recognize is that the explicit formulations are indeed a way of representing the implicit assumptions, while you have not given an iota of proof that these representations can be given without the kinds of formulations and abstractions you object to.

You're stuck with your own naive view that there must be an explicit correspondence between every mathematical formula and some fact in the world outside the formulation. But beyond the mere event of counting, that's not how it works, and you haven't shown that it can work that way. Maybe an analogy will help. It's as if one were looking at the index of a book and said, "What the hell does this mean? These words that follow one another don't even form sentences! What gibberish. 'Alcatraz, Adams, Ajax, azure.' That expresses no fact of reality whatsoever. This part of the book is arbitrary nonsense." Of course, such an appraisal omits that the listed words don't function as descriptions of facts but rather serve in a different structural relation. In this way, mathematical formulas are as words in an index: their explicit structures are not necessarily literally meaningful, but rather rely upon another level of structure. I'm not offering this as an argument by analogy; rather, I am suggesting it as illustrative of how you are stuck at a literal level and so miss an abstraction that operates at another level.

Generally, foundational mathematicians especially have no objection to putting mathematics in accord with certain philosophical tenets. Indeed, as I alluded to in the thread you referenced, some mathematicians would prefer different foundations (and there are always other proposals on the table). But alternatives must be sufficient. They must be actual systems, not just hopes for systems. If mathematics could get a tighter correspondence such as you advocate, then that would be great. But the rub is, how to do it? I encourage you to try to find out how. But you'd have to move considerably past your present glibness. First, one needs to know about formal systems. Then you would have to find solutions that have eluded mathematicians for over a hundred years now. This is not impossible, but the proof is in the pudding. Anyone can say, "Don't swing a baseball bat that way! There's a better way to hit a baseball," as long as you're not obligated to show a better way.

The word "two" DOES refer to "a count where there is one item and another item but no more"; pretending that 2 = {{}, {{}}} does not change this fact.

1. You're confusing a syntactical identity formula with a semantical statement.

2. You keep overlooking that the sense of these expressions of yours such as 'a count where there is one item and another item but no more' ARE captured by set theory, even as your OWN formalizations were just the same ones that set theory already has.

3. A minor point, but you're using Zermelo's construction here, while von Neumann's has long since supplanted it as the usual one.

4. Just in case you think that the braces notation here reveals some kind of ontological vacuity, you should know that: i) The braces notation need not be part of the official formalization, and ii) It is not required that the empty set itself is nothing, since there is no stipulation that the empty set cannot be some chosen object that has no members, especially as the words 'the empty set' are an informal linguistic convenience. That is, set theory does not assume that there exists some object which is itself nothing or nothingness.
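As an illustration of the two constructions mentioned in point 3 (a sketch of mine, using Python frozensets to model hereditarily finite sets): Zermelo's coding takes n+1 = {n}, while von Neumann's takes n+1 = n ∪ {n}, so that each von Neumann numeral is the set of its predecessors and has its own value as its cardinality:

```python
# Two standard set-theoretic codings of the naturals, built from the empty
# set alone. frozenset is used so that sets can be members of sets.

EMPTY = frozenset()

def zermelo(n):
    # Zermelo: 0 = {}, n+1 = {n}; every nonzero numeral is a singleton
    s = EMPTY
    for _ in range(n):
        s = frozenset({s})
    return s

def von_neumann(n):
    # von Neumann: 0 = {}, n+1 = n U {n}; n is the set of its predecessors
    s = EMPTY
    for _ in range(n):
        s = s | frozenset({s})
    return s

two = von_neumann(2)
print(two == frozenset({EMPTY, frozenset({EMPTY})}))  # True: this is {{}, {{}}}
print(len(two))                                       # 2: its cardinality is 2
print(von_neumann(1) in two)                          # True: order as membership
```

The last line illustrates a design advantage of the von Neumann coding: m < n can be read off as m ∈ n, and |n| = n, neither of which holds for the singleton-style coding.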

Consider a formal system where the axioms are:

  • 1. hula bula

  • 2. blabla krakula

and where the rules of inference are duplication and concatenation. This would allow me to "prove," for example, the proposition "hula bula hula bula blabla krakula."

Is this a useful formal system? Can it help anyone build a computer, or any other technological feat? Not as long as you refuse to give a plain-English explanation of just what exactly "hula bula" and "blabla krakula" mean and why they can be validly duplicated and concatenated. Without an objective and real referent for each symbol, a language--be it formal or informal--is just an exercise in "garbage in, garbage out." The input needs to be meaningful for the output to be meaningful.

1. I don't know whether your example can be useful. It might express relations in some system of electronics, switching, genetic mutation, or biological reproduction, for all I know. And the fact that you chose silly-sounding words has nothing to do with this.

2. Even if the system had no interpretation, as long as the system is consistent, then it may have an interpretation later.

3. Mathematicians don't usually propose theories that don't have intended interpretations. In explaining the nature of formal systems, sometimes very silly and arbitrary systems are given without interpretation, but the purpose of this is to emphasize the difference between syntax and interpretation. Meanwhile, you keep posting in complete disregard of the fact that mathematics does have a semantics.

4. Are you declaring that all mathematics must serve technology? A great deal of mathematics does serve technology, and I hardly doubt that man's most basic inclination toward mathematics is to deal with the physical world. But not all mathematical thinking is thus motivated. Since the Greeks, mathematicians have wondered about the abstract relations among abstract mathematical objects whether or not some technological development accrues from such investigations. And I surmise that often enough the technological application is only discovered well after the theoretical investigations.

5. Your analogy with input and output misses the very point that mathematics is not data but rather mathematics provides structures (not necessarily 'structures' in the technical senses here) that are abstract from data but are structures in which to evaluate data and their relations.


It is possible to define 2 in such a way. However, it is of no practical value.

How do you know that?

Just as in your example with the axioms. You can invent any kind of axioms and any kinds of rules. If you are consistent with your application of them, you can have any kind of system.

Consistency or lack of consistency is a property of systems and axiomatizations themselves, not of their application.

However, the longevity of a system based on arbitrary axioms is determined by its practical use - if there is none, it won't survive. It will be dismissed as junk.

Then the formulation you just rejected as having no practical value does have practical value since it has survived (in a slightly modified form, and the modification is one only of concern for elegance) for nearly a hundred years now.

Edited by LauricAcid

Evaluating your success in this endeavor requires judging whether you've met these criteria:

1. That your system is indeed a formal system such that there is an effective procedure to decide whether something is a proof.

Sure.

2. That your system provide at least all of the mathematics that is needed for all of the technology that is already based on or has been developed in collaboration with the mathematics you reject.

The only part of the currently mainstream mathematics I reject is the lack of an (explicit) objective foundation and (in some cases, as with the natural numbers) the use of a rationalistic foundation as a substitute for it. Once I have reproduced the axioms of an existing theory, either as objective axioms or as theorems, the rest of the theory follows automatically.

3. That you've provided some definition of 'explicitly based on reality'.

Thanks to Ayn Rand and Objectivist epistemology, I have ample knowledge on what reality-orientation means, so that should be no problem.

But I have to stress again that this is something that necessarily has to be done before the formal system is created, and therefore outside it.

---

I will respond to the rest of your post early next week.


How do you know that?

If you hold it does, then prove it. I claim that it doesn't, because claiming that it might would be embracing the arbitrary as probable.

Consistency or lack of consistency is a property of systems and axiomatizations themselves, not of their application.

Yet if you are inconsistent in applying the axioms, there is no longer a system to talk about. There is a hodgepodge of statements, but that is not a system.

Then the formulation you just rejected as having no practical value does have practical value since it has survived (in a slightly modified form, and the modification is one only of concern for elegance) for nearly a hundred years now.

Survived how? It's being remembered? So are the dead.

Edit: Changed "claim" into "hold".

Edited by source

If you hold it does, then prove it. I claim that it doesn't, because claiming that it might would be embracing the arbitrary as probable.

1. You made a universal claim that no practical value can come from the formula, so the burden of proof is on you, not me.

2. And your own proof is that to hold the negation entails "embracing the arbitrary as probable." That requires that you define 'arbitrary' and 'probable'. Also, you need to show that holding that something might be the case entails holding that it probably is the case. But who, other than you, thinks that holding that something might be true entails holding that it is probably true? Saying "This piece of fabric in my hand might have some practical value" does not entail holding that the piece of fabric probably has some practical value.

3. Anyway, that the formula has practical value is seen by the fact that it is an important part of the development of mathematics that led to and is used with the theory of computability. The author of the formula, John von Neumann, is among the very most important people (arguably, is the most important person) in the invention of the modern computer and computer programming. The set theoretic development of the number systems is an important part of the mathematical theory that led to the theory of computability and to building the first modern computers. Work at these levels of abstraction and complexity is not done in a theoretical vacuum. Certain formulations that assist mathematicians in the rigor, clarity, and unification of their abstractions then contribute to the theory that is the intellectual reservoir upon which the applied sciences draw.

Yet if you are inconsistent in applying the axioms, there is no longer a system to talk about. There is a hodgepodge of statements, but that is not a system.

If the axioms and rules of inference are consistent, then there is no inconsistent application of them within a formal system. Or do you have some special (or any) meaning of 'application' in mind that is relevant to the study of formal systems?

Survived how? It's being remembered? So are the dead.

Not just remembered, but used every single day, including today, by thousands and thousands of mathematicians, as the standard set theoretical definition.


1. You made a universal claim that no practical value can come from the formula, so the burden of proof is on you, not me.

Are you joking?

2. And your own proof is that to hold the negation entails "embracing the arbitrary as probable." That requires that you define 'arbitrary' and 'probable'. Also, you need to show that holding that something might be the case entails holding that it probably is the case. But who, other than you, thinks that holding that something might be true entails holding that it is probably true? Saying "This piece of fabric in my hand might have some practical value" does not entail holding that the piece of fabric probably has some practical value.

You need to brush up on those terms. Arbitrary and probable are both defined in a dictionary and I'm sure those definitions will do. By claiming that something you don't know anything about might be true, you implicitly show that you are functioning on the faulty premise that anything goes; any figment of anyone's imagination might be true then, according to you. I never heard of 2 being defined as {{}, {{}}}, so I asked for a concrete example (proof) of where it is practically applicable. You haven't done that; instead you inverted the principle of who has the burden of proof.

3. Anyway, that the formula has practical value is seen by the fact that it is an important part of the development of mathematics that led to and is used with the theory of computability. The author of the formula, John von Neumann is among the very most important people (arguably, is the most important person) in the invention of the modern computer and computer programming. The set theoretic development of the number systems is an important part of the mathematical theory that led to the theory of computability and to building the first modern computers. Work at these levels of abstraction and complexity is not done in a theoretical vacuum. Certain formulations that assist mathematicians in the rigor, clarity, and unification of their abstractions then contribute to the theory that is the intellectual reservoir upon which the applied sciences draw.

Still, I have never heard that 2 = {{}, {{}}}, nor of this formula's practical applications.

If the axioms and rules of inference are consistent, then there is no inconsistent application of them within a formal system. Or do you have some special (or any) meaning of 'application' in mind that is relevant to the study of formal systems?

So when you are working within a formal system you can't make an error? Do you know what consistency is? It means that your axioms don't contradict each other, and that your formulas don't contradict your axioms. You can be inconsistent at any point of building a system, not only during the axiomatization.

Not just remembered, but used every single day, including today, by thousands and thousands of mathematicians, as the standard set theoretical definition.

And the application is...


Sure.

Then you've committed to providing a system for which there is an algorithm to decide whether a sequence of formulas is a proof. This entails that there are algorithms to decide (1) whether an expression is a well formed formula, (2) whether an expression is an axiom, and (3) whether each entry in a sequence follows from earlier entries by a rule of inference.
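To illustrate (a sketch of mine, reusing the toy "hula bula" system from earlier in the thread), such a decision procedure can be quite simple: a sequence is a proof exactly when each entry is an axiom or follows from earlier entries by one of the rules, and checking that is finite and mechanical:

```python
# An effective procedure for proof-checking in the toy system with axioms
# "hula bula" and "blabla krakula" and rules of duplication and concatenation.
# A "proof" is a sequence of strings; proofhood is decided by a finite check.

AXIOMS = {"hula bula", "blabla krakula"}

def is_proof(seq):
    for i, line in enumerate(seq):
        earlier = seq[:i]
        ok = (line in AXIOMS
              # duplication: s yields "s s"
              or any(line == s + " " + s for s in earlier)
              # concatenation: s, t yield "s t"
              or any(line == s + " " + t for s in earlier for t in earlier))
        if not ok:
            return False
    return True

proof = ["hula bula",
         "blabla krakula",
         "hula bula hula bula",
         "hula bula hula bula blabla krakula"]
print(is_proof(proof))             # True
print(is_proof(["krakula hula"]))  # False: not an axiom, no earlier entries
```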

The only part of the currently mainstream mathematics I reject is the lack of an (explicit) objective foundation

Whether mathematics provides an objective foundation in an Objectivist sense is one matter, but mathematics does provide the objectivity of the existence of algorithms for deciding whether arguments are proofs. Now, if you propose to meet Objectivist criteria as well, then fine, but if yours is to be a formal system, then you must meet the criterion of the existence of effective methods also.

and (in some cases, as with the natural numbers) the use of a rationalistic foundation as a substitute for it.

Again, you are ascribing philosophical commitments that you have not demonstrated that mathematics has. You fail to see that the axiomatizations need not be taken as means of describing but instead as means of "coding". The codifications are means to arrive at number systems that are isomorphic with the way both you and I think about these numbers in their more everyday sense. What you fail to understand is that "coding" these everyday senses into formal terms so that they are also descriptions is not so easily done.

Here's another analogy (not meant as an argument, but as a way of conveying to you what you're missing): Suppose we have a database of employees in a company. Then each record card that we look at is a "profile" or a kind of "description" of each employee, as each record tells us about some essential property of the employee's employment with the company. For example, age is an essential property in this context, since age will figure into the employee's retirement date. But we also notice that each employee's record includes a database index number that is a code for each employee. We say, "What's this? There is nothing about John Parker that has anything to do with the number 8000947683. This code for John Parker with the number 8000947683 is arbitrary." Do we then hold that therefore the database is not reality based? Of course not.

Similarly, the way mathematical objects, such as numbers, are "coded" by set theory does not need to be taken as an attempt at a description of the objects. Instead, the "coding" of the objects provides a system of relations such that the system is isomorphic with the number systems as they are given outside of set theory. For example, the essential properties of natural numbers include properties such as: (1) each natural number, except 0, follows some previous natural number by addition of 1, (2) multiplication of natural numbers is commutative, etc. And the number systems that set theory provides do uphold all the properties we expect for these number systems. That is the point of set theoretical developments: to uphold the essential properties of the number systems while also unifying them so that we don't need a separate axiom system for each one while we can talk about functions and other comparisons among them - all within one theory. But to do this, we may find that our initial "codings" have to take certain rather odd routes to the objective, only because more direct routes have not been discovered. So, if you can devise a more direct route, then great. But until you do, your objections are not refutations of routes that, in the meanwhile, at least do get us to our objectives.
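As a small illustration of this isomorphism point (a sketch of mine; `numeral` and `add` are names chosen for the sketch), one can define successor and addition on the von Neumann coding of the naturals and check, on small instances, that expected properties such as (1) and (2) above hold:

```python
# Spot-checking that a set-theoretic coding upholds expected arithmetic
# properties: the von Neumann numerals, with successor n+1 = n U {n}.

EMPTY = frozenset()

def succ(n):
    return n | frozenset({n})

def numeral(k):
    n = EMPTY
    for _ in range(k):
        n = succ(n)
    return n

def add(m, n):
    # recursion on the second argument: m + 0 = m, m + succ(n) = succ(m + n)
    if n == EMPTY:
        return m
    # on von Neumann numerals, the member of largest cardinality is n-1
    pred = max(n, key=len)
    return succ(add(m, pred))

# (1) every nonzero numeral is the successor of a previous one
assert all(numeral(k) == succ(numeral(k - 1)) for k in range(1, 6))
# (2) addition is commutative on these instances
assert all(add(numeral(a), numeral(b)) == add(numeral(b), numeral(a))
           for a in range(4) for b in range(4))
print("checked")
```

This is of course only a finite spot-check, not a proof; the point is that the coded objects behave, in the checked respects, exactly as the everyday numbers do.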

Once I have reproduced the axioms of an existing theory, either as objective axioms or as theorems, the rest of the theory follows automatically.

I'm not sure what you have in mind here, but it is true that if the axioms of a theory T are a subset of the theorems of your theory S, then T is a subset of S.

However, if S adds axioms to T, then S may be inconsistent even if T is consistent.

Anyway, if you object to set theory, then I don't see why you would even consider "reproducing" its axioms either as axioms or theorems. And, if by "objective axioms" you mean axioms in the sense of Objectivist axioms (those that cannot be denied without the denial including an affirmation of them), then I think you do not appreciate the combined criteria here. First, the Objectivist notion of axiom uses, or is at least quite akin to, refutation by pointing out the fallacy of stolen concept. Now, either: (1) using a stolen concept entails self-contradiction or (2) stolen concept is a form of reductio ad absurdum, in which case it does not necessarily entail self-contradiction. With (1), your project is a form of logicism, since you will have demanded that the negation of any of your axioms is a self-contradiction, hence your axioms are logically true. With (2), I don't see how your project is distinguishable as an Objectivist one.

Granted, Objectivism rejects the distinction between logically true and contingently true. However, at least as far as first order predicate logic is concerned, the distinction is pretty much built into the semantics of the formal system. Thus, as long as you use first order predicate logic (which, of course, you are not obligated to do), your axioms are thereby subject to this distinction.

Thanks to Ayn Rand and Objectivist epistemology, I have ample knowledge on what reality-orientation means, so that should be no problem.

You need more than your own understanding of Objectivist epistemology, since you also need to show how, and even why, it pertains to formal systems, and that your own formal system upholds the criteria of both Objectivism and those of formal systems.

But I have to stress again that this is something that necessarily has to be done before the formal system is created, and therefore outside it.

Without committing to the way you've couched this, I basically agree. Your project divides into two parts: (1) Creating a formal system and (2) discussing the formal system, from outside the formal system, in terms of Objectivism and formal systems.

However, you should also be aware that mathematics is capable of formalizing the outside discussion itself. It might not be required that you also be capable of that, but at least you should know that mathematics has achieved this intellectual rigor.

Edited by LauricAcid
Link to comment
Share on other sites

WI_Rifleman, curse you for this plague :confused:

I'm still in that "0.999repeating does not equal 1" school.

Doesn't 9 * 0.999repeating equal 8.999repeating, like Bryan said in post #4?

That, and his coffee example in #18, would lead to saying 0.999repeating does not equal one, wouldn't it??

Link to comment
Share on other sites

Are you joking?

It is for whoever makes an assertion to support that assertion. Rarely are things that simple.

Arbitrary and probable are both defined in a dictionary and I'm sure those definitions will do.

Both words have at least a few dictionary definitions and senses. Why don't you specify exactly which definitions and senses you have in mind. Then we can discuss whether, according to those senses, your claim is true that to hold that something might be true is to hold that it probably is true.

To claim that something you don't know anything about might be true

Who said anything about not knowing anything about something? You claimed that there is no practical value in a certain formula. And I simply asked how you know that. How do you arrive at the sweeping generalization that the formula has no practical value? It's one thing to assert, quite reasonably, that you do not know of, or do not see, any practical value in a particular formula. And it might be reasonable in some contexts to assert that it is improbable that a particular formula has practical value. But it's another thing to assert as an unqualified generalization, as you did, that a particular formula does not have any practical value.

You skipped responding to my illustration. I'll give it again: If I'm holding a piece of fabric in my hand, then on what basis would you assert that it has no practical value? Maybe it doesn't have practical value, but maybe it does have practical value. But if I assert that it might have practical value, then I am not asserting that it probably has practical value.

And if I'm pointing to a formula on a chalkboard, and you say, "That formula has no practical value," then what is the basis of your assertion? And yours is a different assertion from asserting that you don't know or cannot see any practical value in the formula. Suppose we knew nothing about the formula but walked into an empty classroom with the formula written on the chalkboard. The formula might be nonsense, just written as a joke or written just to test how well a certain brand of chalk performs on that kind of blackboard. But the formula might be a piece of a larger theory that tells how to make an automobile that runs on half the fuel. If you said, "That formula has no practical value," then a reasonable response is, "How do you know that?" And to say that the formula might have practical value is not saying that the formula probably does, but only that it might.

you implicitly show that you are functioning on the faulty premise that anything goes; any figment of anyone's imagination might be true then, according to you.

No, you've committed a non sequitur. Saying that a certain thing might be true does not entail that one thinks that anything might be true. My asserting that the train might arrive late does not entail that I hold that two might equal four.

You claimed the proposition that the formula has no practical value. The burden of proof for that claim is on you for that proposition. If I claim that the formula does have practical value, then the burden of proof is on me for that proposition. And if I claim that the formula might have practical value, the burden, though it be far less, is also on me.

But the actual dialogue here, as to the particular formula, was that you first made the claim that the formula has no practical value. Thus, at that juncture arises the reasonable question: How do you know that the formula has no practical value? You may bounce that back by saying, "Well, what is its practical value?" And that's fair enough. But in so doing, you have not answered the question of how you know that it does not. And even if one could not say what practical value a piece of fabric or a formula on a blackboard might have, we still cannot infer that the piece of fabric or the formula has no practical value, since man's knowledge, even on an Objectivist evaluation, is always limited and ever growing. We may not know of a practical use for a particular piece of fabric or formula on a chalkboard, but that does not entail that we might not ever know of a practical use for them, and it especially does not entail that other people might not have a use for them. To claim otherwise would be to claim empirical knowledge beyond our own empirical experience. This is not to be conflated with rebutting questions like, "How do you know god does not exist?" Such a question is not an empirical one, and indeed is formed from undefined abstractions. But asking how you know that a particular piece of fabric or a particular sequence of symbols has no use is not asking about an undefined abstraction; rather, it is asking the empirical question of whether these things have a use.

And, your original context was that formulations that are useless in practice will not survive and will be thrown into the junk. And I met that remark by pointing to the fact that the very formula we're talking about is an extremely important formula in a theory that not only has survived but has thrived for nearly a hundred years now as it continues to be the standard theory both for the development of mathematics and for the meta-theory of mathematical logic.

Moreover, since you've stated that formulations that have no practical use will not survive, then, by your own premise, you've attested to the practical use of the particular formula in question. By your own premise, and modus tollens, since the formula has survived (and as an important part of mathematics, too), it has practical use.

On the other hand, one should doubt your premise. It might be that formulas that don't have practical use still survive, since the formulas have abstract and theoretical appeal. And I think that in the case of the formula under discussion, its abstract and theoretical appeal lies in its being an important part of a systematization that illumines and makes rigorous abstractions that are not necessarily themselves technological solutions, or even intended as such, but are important conceptualizations that feed applied mathematics and the sciences.

I never heard of 2 being defined as {{}, {{}}}

Then you've never studied the subject.
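For the curious, {{}, {{}}} comes from the von Neumann coding, in which each natural number is the set of its predecessors. A minimal sketch (my illustration, not from the thread; the names are mine):

```python
# Sketch (my illustration) of the von Neumann coding of naturals as
# sets: each number is the set of its predecessors. frozenset is used
# so that the "sets" are hashable and can be members of other sets.

ZERO = frozenset()            # 0 = {}
ONE = frozenset({ZERO})       # 1 = {0} = {{}}
TWO = frozenset({ZERO, ONE})  # 2 = {0, 1} = {{}, {{}}}

# Each representative's cardinality matches the number it encodes,
assert len(ZERO) == 0 and len(ONE) == 1 and len(TWO) == 2
# and under this coding, "less than" is just set membership.
assert ZERO in TWO and ONE in TWO
```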

so I asked for a concrete example (proof) where it is practically applicable.

If you mean for me to show you a particular machine or piece of technology that uses that formula in the design, then I don't know of such a thing, especially since I've never looked for such a thing. But I did give you an answer as to the overall usefulness of this formula as part of a mathematical theory and mathematical thinking that has fed, probably more than anything else has, into the invention of modern computing. Moreover, one just has to look at computer programming to see how mathematics, including, and especially, set theory, recursion theory, and computability theory, has enabled it. Anyway, again, my initial point was not even to show that the formula does have use, but rather to ask you how you had arrived at the conclusion that it does not.

So when you are working within a formal system you can't make an error?

Fair enough, if that's what you meant. However, one doesn't usually talk about formal systems that way. Of course, one recognizes that there is human error. But human error is more a subject of psychology than of mathematics. In the mathematical sense, once you've erred as to the formal system, you're no longer working in the formal system. In other words, the formal system is what it is, regardless of whether we make errors using it. Therefore, any errors we make are not applications of the system, but rather are activities not even in the system, since to apply the system means to apply the system as it is a system; it does not mean to apply the system with errors we've added that are not part of the system.

However, I grant that this is a fine point, and that as you've explained your sense, at least your particular point about the consistency of systems is well taken.

Do you know what consistency is?

Oh please, what a blustery challenge. Not only do I know what consistency is, but I can give you a rigorous mathematical formulation of the principle, and discuss several important theorems about the relations among consistency, satisfiability, provability, cardinality, number systems, etc. Not that I'm such an expert on the subject, but I know enough about it to make your challenge foolish.

Edited by LauricAcid
Link to comment
Share on other sites

I'm still in that "0.999repeating does not equal 1" school.

I gave a proof a few posts back. What is it you do not understand or take exception to in that proof?

Doesn't 9 * 0.999repeating equal 8.999repeating, like Bryan said in post #4?

Yes, 9(.999...) = 8.999...

But #4 is incorrect to deny that, in #1, 9x = 9.

Moreover, even if #4 refuted #1, this would not entail that #4 has proven that .999... not= 1. Anyway, #4 does not refute #1. (Though I'm not inclined to defend #1, since it relies upon manipulations of infinite sequences in a way that I haven't verified for myself is justified.)

That, and his coffee example in #18, would lead to saying 0.999repeating does not equal one, wouldn't it??

No, since #4 is dispensed with as I just mentioned, and #18 is also fallacious. From that post:

The infinite series (0.9 + 0.09 + 0.009 + ...) is just a big long list of 9s if you actually calculate the sum of the series.

No, if you actually calculate the limit of the series, then the limit, which BY DEFINITION is what .999... is, is 1.
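A quick numerical illustration of that point (my sketch; the function name is mine): the partial sums 0.9, 0.99, 0.999, ... close the gap to 1 by a factor of ten at each step, and .999... denotes their limit.

```python
# A numerical illustration (not from the thread): the partial sums of
# 9/10 + 9/100 + ... approach 1, and .999... denotes their limit.

def partial_sum(n: int) -> float:
    """Sum of 9/10^j for j = 1..n, i.e. 0.9...9 with n nines."""
    return sum(9 / 10**j for j in range(1, n + 1))

# The gap to 1 shrinks by a factor of 10 at each step.
for n in (1, 5, 10):
    print(n, partial_sum(n), 1 - partial_sum(n))

assert abs(1 - partial_sum(15)) < 1e-14
```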

.999~ is theoretically the largest number less than one, but it doesn't actually exist in reality.

Whatever is meant here by 'reality', it is correct that there is no largest real number less than 1. But .999... is NOT theoretically, or in any other way, the largest real number less than 1. To state that .999... is theoretically the largest real number less than 1 is to reveal a fundamental ignorance about this subject. Again, as I said, .999... is the limit of a sequence. This limit is not theoretically, or otherwise, a non-existent largest number less than 1; rather the limit IS 1.

Let's pretend you have 1 cup of coffee.  You take the smallest sip of it that you possibly can.  You now have less coffee in the cup than you had before.  We'll say that you now have .999~ cups of coffee.

But in reality, no matter how small of a sip you took, you still removed a measurable amount of coffee from the cup.  You can't take an infinitely small amount of coffee out of the cup, which is why .999~ doesn't actually exist.

That there is no smallest possible sip is exactly why .999... = 1. The poster has it backwards: .999... exists all right, as it is the limit of a sequence and that limit is 1; and this relies upon, rather than contradicts, that there is no "smallest sip" or smallest real number to subtract from 1.

Edited by LauricAcid
Link to comment
Share on other sites

Capitalism Forever,

I just thought of a good example to illustrate my point:

In some programming languages, we have statements, regarding integers such as:

X = X + 1

Now, if we try to relate this quite literally to our empirical reality, we have to think: forget about arbitrary, this is flat-out contradictory! No number equals itself plus 1.

But in the context of the programming language, the formula makes perfect sense. It "encodes" a mathematical step, using incrementation on a variable, but if you take the formula in a certain literal sense, then it is plain wrong. The point being that the programming language works in its own way, not in the literal way of asserting that a number plus 1 is equal to itself.
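A minimal sketch of the point (mine, in Python, though any language with assignment would do):

```python
# In a programming language, "x = x + 1" is an assignment, not an
# equation: it replaces the value bound to x with that value plus one,
# so no contradiction is being asserted.

x = 5
x = x + 1   # read: "let the new x be the old x plus 1"
assert x == 6
```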

Along these lines, set theory "codes up" numbers themselves in, admittedly, rather odd ways. But this "coding" should be taken in its own context and not necessarily as literal descriptions.

Edited by LauricAcid
Link to comment
Share on other sites

I gave a proof a few posts back. What is it you do not understand or take exception to in that proof?

Which post #?

But #4 is incorrect to deny that, in #1, 9x = 9.

But if 9 * 0.999repeating = 9x

and 9 * 0.999repeating = 8.999repeating

then 9x = 8.999repeating, not 9. That still doesn't prove that either 0.999repeating = 1, or 8.999repeating = 9.

I'm working on a full argument, but at this point, 0.999repeating can't be said to equal 1.

No, if you actually calculate the limit of the series, then the limit, which BY DEFINITION is what .999... is, is 1.

The problem is that he didn't say the limit of 0.999repeating = 1, which everyone would agree with. The problem isn't one of limits, though many have used them and IMO thus come up with a false answer.

Link to comment
Share on other sites

Here's a proof:

That's from #78.

then 9x = 8.999repeating, not 9.

No, you're overlooking that BOTH these statements can be true:

9x = 8.999...

9x = 9

That still doesn't prove that either 0.999repeating = 1, or 8.999repeating = 9.

As long as one accepts manipulating infinite sequences as in #1, then #1 does prove that .999... = 1, and we can conclude that 8.999... = 9. Meanwhile, #78 proves this without the assumption that we can manipulate infinite sequences as in #1.

I'm working on a full argument, but at this point, 0.999repeating can't be said to equal 1.

If your argument is to be relevant, it must not contradict that .999... is the limit of the infinite sequence 9/10, 99/100, 999/1000, ...

(Come to think of it, it may not even be incorrect to call this a definition. It's a theorem that follows from a general notational convention, in a sense a definition, as to infinite decimal expansions.)

Whatever you might mean by .999..., if it's not what mathematicians mean, then fine, you've proved something about some personal system of yours. This is not at issue.

But you say, "at this point, 0.999repeating can't be said to equal 1."

Please, in mathematics, an infinite decimal expansion is a notation for the limit of a sequence of sums. And, all that mathematicians mean by saying that it is proven that .999... = 1 is that it is proven that the limit of the sequence is 1. And I posted a proof of this.
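For reference, the standard computation behind that proof can be written out with the geometric-series sum (my restatement, not a new result):

```latex
S_n \;=\; \sum_{j=1}^{n} \frac{9}{10^j}
      \;=\; \frac{9}{10}\cdot\frac{1-(1/10)^n}{1-1/10}
      \;=\; 1 - 10^{-n},
\qquad
\lim_{n\to\infty} S_n \;=\; \lim_{n\to\infty}\bigl(1 - 10^{-n}\bigr) \;=\; 1 .
```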

The problem isn't one of limits, though many have used them and IMO thus come up with a false answer.

The answer is correct given what mathematicians take the notation of infinite decimal expansions to represent. Your coming up with a different system of understanding this notation is irrelevant. However, if you have a different theory and definitions, then, by all means, you're welcome to it, and you're welcome to present it. But to give a theory is to state a logistic system, axioms, primitives, and definitions. Moreover, if you want your theory to receive any attention, then one would expect that your theory offer some improvement over, or at least some attractive features relative to, existing theories.

Edited by LauricAcid
Link to comment
Share on other sites

I've almost worked all this through, but a quick reply to #78:

Definition: .999... = lim(k = 1 to inf) SUM(j = 1 to k) 9/(10^j)

You've made 0.999repeating into a limit, which is neither necessary nor a stated part of the original question.

0.999repeated, as originally used (i.e. not a limit) is an infinite series, which is not the same thing as a limit.

Like I said, if you (incorrectly) change the problem into a limit one, then you get 1. On the other hand, if you use infinite series, you don't get 0.999repeating equaling one.

Link to comment
Share on other sites

Okay the problem, as initially used would mean:

0.999repeating equals the sum of 9/10^x, where x goes from 1 to infinity.

This question has been rephrased to mean:

0.999repeating equals the limit of the sum of 9/10^x, where x goes from 1 to infinity.

Just because the infinite series uses infinity doesn't make it a limit.

Now if you take infinite series, a second error occured regularly:

x = 0.999repeating

10 * x = 10 * 0.999repeating

10x = 10 * 0.999repeating

10x - x = 10 * 0.999repeating - 0.999repeating

9x = 10 * 0.999repeating - 0.999repeating

9x = 9.999repeating - 0.999repeating

:lol: 9.999repeating - 0.999repeating = 9 :P

That's the error. The reasoning, I suppose is that

10 * 0.999 repeating - 0.999repeating equals

[ 9 + {sum of 9/10^x, x from 1 to infinity} ] - [sum of 9/10^x, x from 1 to infinity]. It doesn't, AFAIK.

Link to comment
Share on other sites

Depending on how you arrange the series, you get different answers.

Take [ 9 + {sum of 9/10^x, x from 1 to infinity} ] - [sum of 9/10^x, x from 1 to infinity]:

(@first x values) 9 + 0.9 - 0.9 = 9

(@second x values) 9 + 0.09 - 0.09 = 9 ad infinitum = 9

That does equal nine. But you arbitrarily took the 9 out of the series, and that has an effect on the answer.

The better way to do it would be to not arbitrarily take values out of the series. At any rate, if putting the 9 outside is valid, it should still give the same value as taking nothing out, or taking any other arbitrary value out.

[sum of 9/10^x, x from 0 to infinity] - [sum of 9/10^x, x from 1 to infinity].

(@first x values) 9 - 0.9 = 8.1

(@second x values) 8.1 + 0.9 - 0.09 = 8.91 ad infinitum to 8.999...9991 = 8.999repeating

[9.99 + {sum of 9/10^x, x from 3 to infinity} ] - [sum of 9/10^x, x from 1 to infinity]:

(@first x values) 9.99 + 0.009 - 0.9 = 9.099

(@ second x values) 9.099 + 0.0009 - 0.09 = 9.0099 ad infinitum to 9.000...00099

Granted, the limit for

9

8.999repeating

9.000...00099

= 9 for all of these. But the problem isn't a limit one, and none of those numbers equals the others.
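For what it's worth, the agreement of those three limits can be checked numerically (a sketch of mine; the helper name `tail` is my own):

```python
# Numerical check (my addition): the three arrangements above, taken as
# limits of partial sums, all come out to 9.

def tail(lo: int, n: int) -> float:
    """Partial sum of 9/10^x for x = lo..n."""
    return sum(9 / 10**x for x in range(lo, n + 1))

n = 30
a = 9 + tail(1, n) - tail(1, n)      # [9 + series] - [series]
b = tail(0, n) - tail(1, n)          # the "8.999..." arrangement
c = 9.99 + tail(3, n) - tail(1, n)   # the "9.000...099" arrangement

for v in (a, b, c):
    assert abs(v - 9) < 1e-9         # all three tend to 9
```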

On a similar note, the limit of 0.333repeating = 1/3, but 0.333repeating doesn't equal 1/3, I believe. I too could be mistaken, though :lol:

Link to comment
Share on other sites

[Note: posted before seeing the previous post.]

If the stated problem does not include that an infinite decimal expansion is notation for a limit, then the stated problem is in disregard of the basic mathematics.

Fine, if you think the matter is worked out in some other theory or by some other notion of what an infinite decimal notation stands for, then your remarks about the question of this thread regard this other, unstated, theory that you have somewhere in your head, while other people, also in disregard of the mathematics of this matter, have their own unstated theories somewhere in their heads. Yes, as to working out the matter in that context, I have nothing to contribute. You may argue among yourselves to your intellects' delight about your mutually contradicting, unstated theories with their unstated axioms, unstated primitives, and undefined terms.

You wrote, "0.999repeated, as originally used (i.e. not a limit) is an infinite series, which is not the same thing as a limit."

But real numbers are not infinite sequences, while they are limits of infinite sequences. (They can be equivalence classes of infinite sequences, but that's not to say that they are sequences themselves.)

Admittedly, in informal presentations, there is some slippage in terminology even among mathematicians. But here's one terminology:

An infinite sequence is a function from the set of natural numbers.

A real valued sequence is a sequence into the set of real numbers.

An infinite series (the only term here that has some terminological slippage) is the infinite sequence of partial sums g, given by g(n) = SUM[k = 1 to n] f(k), for some infinite sequence f. (I took the liberty of using 1 instead of 0 as the base, which is not important.)

Theorem: If b is an infinite sequence into the set {0, 1, 2, 3, 4, 5, 6, 7, 8, 9}, then the real valued sequence f (for convenience, starting on 1), defined by f(k) = SUM(j = 1 to k) b(j)/(10^j), converges to a real number.

Notational convention: Given b and f as above, the decimal expansion .b(1)b(2)b(3)... stands for the real number that f converges to.
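That convention can be sketched computationally (my illustration, using exact rational arithmetic; the names are mine):

```python
# Sketch (my addition) of the convention: for a digit sequence b, the
# expansion .b(1)b(2)b(3)... denotes the limit of the partial sums
# f(k) = b(1)/10 + b(2)/100 + ... + b(k)/10^k, computed here exactly.

from fractions import Fraction

def f(b, k: int) -> Fraction:
    """k-th partial sum of the decimal expansion given by b."""
    return sum(Fraction(b(j), 10**j) for j in range(1, k + 1))

def nines(j): return 9    # digit sequence for .999...
def thirds(j): return 3   # digit sequence for .333...

# The partial sums close in on 1 and 1/3 respectively.
assert Fraction(1) - f(nines, 20) == Fraction(1, 10**20)
assert Fraction(1, 3) - f(thirds, 20) == Fraction(1, 3 * 10**20)
```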

You wrote, "Like I said, if you (incorrectly) change the problem into a limit one, then you get 1. On the other hand, if you use infinite series, you don't get 0.999repeating equaling one."

No, if you correctly identify the mathematics behind such notation and problems, then you correctly prove that .999... = 1. On the other hand, you just asserted that .999... not= 1, and you haven't even given a proof; moreover, you haven't even given a definition of .999... that makes sense, nor have you stated any mathematical axioms, primitives, or definitions for anything you've said.

You wrote, "Thus the ultimate problem is you don't get 0.999repeating = 1 from infinite series, only if you change the problem to a limit one."

I think you mean not 'don't' but rather 'do' in that sentence. And, again, if you are speaking outside the context of limits, then you haven't said in what sense an infinite sequence is a real number. And, you've just reiterated your assertion yet again, without even a hint of a proof (which would be meaningless were you to give it anyway, since you haven't stated axioms, primitives, or definitions).

Edited by LauricAcid
Link to comment
Share on other sites
