Nate Posted December 8, 2006 (edited)

http://www.bbc.co.uk/berkshire/content/art...o_feature.shtml

Make up a symbol for an existing concept with no new information and call it a solution.

Edit: Am I missing something here? Why does this matter?

Edited December 8, 2006 by Nate
Nate T. Posted December 8, 2006

As far as I can tell, he's basically adjoining a new element to the extended reals and saying that anything you do to it at all returns that same symbol again. I don't see how assigning it the fancy name of "nullity" is different from simply calling such expressions indeterminate, either; after all, anything you do to an indeterminate expression will return an indeterminate expression too, since if this weren't the case you could simply undo the operations and find that you started with something determinate.

I don't particularly like his "practical" reasons for introducing this symbol. L'Hopital's rule already handles indeterminate cases fine when they arise as limits, and the "the pacemaker will stop if it divides by zero" argument is pretty dumb too, since you're going to have to write exceptions in your code to handle this "nullity" thing as well.
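Nate T.'s observation that any operation on an indeterminate expression yields another indeterminate expression is, incidentally, exactly how IEEE 754 floating point already treats NaN ("not a number"). A minimal Python sketch, using nothing beyond the standard library:

```python
# NaN behaves much like the proposed "nullity": it absorbs every
# arithmetic operation and compares unequal even to itself.
import math

nan = float("nan")  # the IEEE 754 quiet NaN

results = [nan + 1.0, nan * 0.0, nan - nan, nan / 2.0]
print(all(math.isnan(r) for r in results))  # True: NaN propagates through arithmetic

print(nan == nan)  # False: NaN is not equal to anything, itself included
```

(Note that Python itself raises ZeroDivisionError for `0.0 / 0.0` rather than returning NaN, so the NaN above is constructed directly.)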
Qwertz Posted December 8, 2006

"The pacemaker will stop if it divides by zero?!" I don't think the pacemaker is going to divide by zero. I don't think pacemakers run on Windows, or have math coprocessors, or are in fact designed to be able to do anything they don't need to do (like divide). Could you imagine? "The retail release of my pacemaker was buggy, so I need a firmware update." The very idea!

Besides, attaching a new name and a new symbol to the concept doesn't solve the problem. Even with this lovely new word and symbol, you still can't divide by zero.

-Q
DavidOdden Posted December 8, 2006

"Make up a symbol for an existing concept with no new information and call it a solution."

No, I think it's worse than that, since this is hiding contradictions or redefinitions, and lying about the utility. First off, the problem is almost 1400 years old, not 1200, having been treated in the Brahmasphutasiddhanta. Second, this stupid symbol of his is not a real number. Third, as every software nerd knows, only an idiot allows a divide by zero. Fourth, England is in deep trouble if this represents the level of its mathematical achievement. Fifth, I'd just love to see the hardware implementation of this new "number" -- how do we reduce it to a pattern of binary bits? Should we now have a new "surreal" bit for storing numbers like his "nullity"?
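David's question about reducing such a "number" to a pattern of binary bits has a partial answer in IEEE 754 itself, which already reserves bit patterns for infinities and NaN. A Python sketch (standard library only) that inspects the 64-bit encodings directly:

```python
# struct exposes the raw IEEE 754 encoding of a Python float (a C double).
import struct

def bits(x: float) -> str:
    """Return the 64-bit IEEE 754 encoding of x as a hex string."""
    return struct.pack(">d", x).hex()

print(bits(1.0))           # 3ff0000000000000: an ordinary normalized double
print(bits(float("inf")))  # 7ff0000000000000: exponent all ones, zero fraction
print(bits(float("nan")))  # exponent all ones, nonzero fraction
                           # (typically 7ff8000000000000 on common platforms)
```

So "special" values do get bit patterns in real hardware; the objection is to pretending a relabeled NaN is a new number, not to encoding one.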
y_feldblum Posted December 8, 2006

It's absolutely stupid. If at any point one comes across 0/0, what one already does is either take the limit or say "there is no answer," and both, in their contexts, are perfectly good solutions.
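The "take the limit" option is worth making concrete: sin(x)/x has the form 0/0 at x = 0, yet its limit as x approaches 0 is 1. A numeric Python sketch:

```python
# Evaluating sin(0)/0 directly would be the undefined form 0/0,
# but approaching zero numerically shows the quotient settling on 1.
import math

for x in [0.1, 0.01, 0.001, 0.0001]:
    print(x, math.sin(x) / x)  # the ratio creeps toward 1.0 as x shrinks
```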
Robert J. Kolker Posted May 17, 2007

"Should we now have a new 'surreal' bit for storing numbers like his 'nullity'?"

We already do. It is called the theory of hyperreals, or non-standard analysis. It was invented in the late 1950s by Abraham Robinson to give a rigorous justification to the method of infinitesimals used by Newton and Leibniz*. The "nullity" is a hyperreal that is smaller than any (ordinary) real number but is not zero. I know it sounds strange, but darned if it doesn't work.

To learn more about this subject in a fairly gentle way, I would suggest looking at -Nonstandard Analysis- by Alain M. Robert. It is available as a Dover book and is not too expensive. There is a more rigorous approach that uses fancy stuff like ultrafilters, but that is strictly for the professional mathematicians.

*The method of infinitesimals was savaged rather adroitly by Bishop George Berkeley (yes, that Berkeley!) in his essay -The Analyst-. Berkeley's attack on infinitesimals was so on target that mathematicians were forced to invent the concept of -limit- in the early 19th century to put calculus and infinite series on a solid logical foundation. But infinitesimals worked! Robinson made infinitesimals logically respectable in his theory of non-standard analysis.

Bob Kolker
DavidOdden Posted May 17, 2007

"We already do."

What hardware implements this lunacy?
D'kian Posted May 17, 2007

"Fifth, I'd just love to see the hardware implementation of this new 'number' -- how do we reduce it to a pattern of binary bits? Should we now have a new 'surreal' bit for storing numbers like his 'nullity'?"

Might it not be represented by virtual bits?

BTW, has there ever been a plane crash, a pacemaker failure, a busted microwave, or even a missed show taping on a VCR because for some reason a division by zero came up? Ever?
softwareNerd Posted May 17, 2007

Most databases can represent a number as being null (i.e. unknown). This is sometimes implemented by having a separate "indicator variable" for each variable that may be null. In other representations, an absence (of something, e.g. a pointer, a field #, etc.) is taken to mean it is unknown.

Suppose one has a file with information about 100 people, and only two fields: (NAME, AGE). Also, suppose AGE can be null. Then:

Querying for all people WHERE AGE > 18 might list 60 people,
Querying for all people WHERE AGE <= 18 might list 35 people,
Querying for all people WHERE AGE is null might list 5 people.

Nulls were introduced to solve a problem, but also ended up being the source of new problems. In most implementations, if any operand is unknown/null, the result of the operation is unknown/null. So, (AGE / 2) will be null/unknown whenever AGE is unknown. (The term "three-valued logic" is commonly used to describe the true/false/unknown outcomes.)

E. F. Codd, who was a key figure in database theory, actually criticized the way nulls/unknowns had been implemented; but his revised suggestion was to move to 4-valued logic: (true, false, unknown, inapplicable). [wiki ref]
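softwareNerd's hypothetical (NAME, AGE) file can be reproduced directly in SQLite via Python's standard library; the table and rows below are made up simply to mirror the example:

```python
# A NULL-able AGE column: the row with unknown age matches neither
# AGE > 18 nor AGE <= 18, and arithmetic on NULL yields NULL.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE people (name TEXT, age INTEGER)")
con.executemany(
    "INSERT INTO people VALUES (?, ?)",
    [("Ann", 30), ("Bob", 12), ("Cat", None)],  # Cat's age is unknown
)

over    = con.execute("SELECT COUNT(*) FROM people WHERE age > 18").fetchone()[0]
under   = con.execute("SELECT COUNT(*) FROM people WHERE age <= 18").fetchone()[0]
unknown = con.execute("SELECT COUNT(*) FROM people WHERE age IS NULL").fetchone()[0]
print(over, under, unknown)  # 1 1 1: the NULL row falls into neither comparison

half = con.execute("SELECT age / 2 FROM people WHERE name = 'Cat'").fetchone()[0]
print(half)  # None: NULL propagates through arithmetic
```

This is the three-valued logic in action: `age > 18` evaluates to unknown for Cat, and unknown is not true, so the row is filtered out of both comparison queries.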
Robert J. Kolker Posted May 18, 2007

"What hardware implements this lunacy?"

The theory of hyperreals is an abstract mathematical theory. It is not a computational algorithm.

Bob Kolker
Chops Posted May 19, 2007

What practical purpose would something like this serve? For example, imaginary numbers don't exist, but they serve a practical purpose in solving certain equations. Does this do anything of any value, or is it just an arbitrary symbol applied to a random unsolv(ed/able) problem? On the surface it seems arbitrary, but I am no mathematician.
Robert J. Kolker Posted May 19, 2007

"What practical purpose would something like this serve? For example, imaginary numbers don't exist, but they serve a practical purpose in solving certain equations."

Complex numbers exist to exactly the same extent as so-called real numbers, rational numbers, and integers exist. Which is to say, they have no physical existence whatsoever. However, they are used, as you point out, to solve equations. Here is an essay by the famous physicist Eugene Wigner you might find interesting: http://www.dartmouth.edu/~matc/MathDrama/reading/Wigner.html

Bob Kolker
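To make the comparison with imaginary numbers concrete: x^2 + 1 = 0 has no real solution, but the complex numbers supply one. A short Python sketch using the standard cmath module:

```python
# Complex numbers solve equations that no real number can.
import cmath

root = cmath.sqrt(-1)  # the imaginary unit
print(root)            # 1j
print(root ** 2 + 1)   # 0j: it satisfies x**2 + 1 = 0 exactly
```

The contrast with "nullity" is that the imaginary unit earns its keep: adjoining it yields a consistent arithmetic (the complex field) with real applications, rather than a single symbol that swallows every operation.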