## Why Do Computers Suck at Math?

### May 13, 2009

You've probably seen this old chestnut by now: ask the Google calculator for 399999999999999 - 399999999999998, and it cheerfully reports the answer as 0 instead of 1.

Insert your own joke here. Google can't be wrong -- math is! But Google is hardly alone; this is just another example in a long and storied history of obscure little computer math errors that go way back, such as this bug report from Windows 3.0.

1. Start Calculator.
2. Input the largest number to subtract first (for example, 12.52).
3. Press the MINUS SIGN (-) key on the numeric keypad.
4. Input the smaller number that is one unit lower in the decimal portion (for example, 12.51).
5. Press the EQUAL SIGN (=) key on the numeric keypad.

On my virtual machine, 12.52 - 12.51 on Ye Olde Windows Calculator indeed results in 0.00.
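You can reproduce the underlying arithmetic in any language with IEEE 754 doubles. Here's a quick Python sketch (the exact trailing digits vary, but the point stands): neither 12.52 nor 12.51 has an exact binary representation, and the subtraction carries that error forward. The old calculator's sin was in the display layer, which mangled the tiny residue instead of rounding it away.

```python
# 12.52 and 12.51 both get rounded when stored as binary doubles,
# so their difference is not exactly 0.01.
a = 12.52
b = 12.51
diff = a - b

print(diff)            # close to 0.01, but not exactly 0.01
print(f"{diff:.2f}")   # rounding to two places recovers "0.01" for display
```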

And then there was the famous Excel bug.

If you have Excel 2007 installed, try this: Multiply 850 by 77.1 in Excel.

One way to do this is to type "=850*77.1" (without the quotes) into a cell. The correct answer is 65,535. However, Excel 2007 displays a result of 100,000.
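You can watch the same arithmetic outside Excel. In IEEE 754 double precision, 77.1 has no exact binary representation, so the product lands a hair below 65,535; Excel's actual bug was in the code that formats results near that value for display, not in the multiplication itself. A quick Python check:

```python
# 77.1 rounds to the nearest binary double, which is slightly less
# than 77.1, so the product comes out just shy of 65535.
product = 850 * 77.1

print(product)           # 65534.99999999999
print(product == 65535)  # False: a few trillionths short
```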

At this point, you might be a little perplexed, as computers are supposed to be pretty good at this math stuff. What gives? How is it possible to produce such blatantly incorrect results from seemingly trivial calculations? Should we even be trusting our computers to do math at all?

Well, numbers are harder to represent on computers than you might think:

A standard floating point number has roughly 16 decimal places of precision and a maximum value on the order of 10^308, a 1 followed by 308 zeros. (According to IEEE standard 754, the typical floating point implementation.)

Sixteen decimal places is a lot. Hardly any measured quantity is known to anywhere near that much precision. For example, the constant in Newton's Law of Gravity is only known to four significant figures. The charge of an electron is known to 11 significant figures, much more precision than Newton's gravitational constant, but still less than a floating point number.

So when are 16 figures not enough? One problem area is subtraction. The other elementary operations -- addition, multiplication, division -- are very accurate. As long as you don't overflow or underflow, these operations often produce results that are correct to the last bit. But subtraction can be anywhere from exact to completely inaccurate. If two numbers agree to n figures, you can lose up to n figures of precision in their subtraction. This problem can show up unexpectedly in the middle of other calculations.
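Here's a small Python illustration of that subtraction hazard, sometimes called catastrophic cancellation. The two values being subtracted agree to nine significant figures; the subtraction itself is exact, but it exposes the rounding error committed when the sum was formed:

```python
big = 1e8     # exactly representable as a double
small = 0.1   # not exactly representable in binary

# Adding then subtracting "should" give small back, but big + small
# gets rounded to the granularity available near 1e8, destroying
# several of small's digits before the subtraction even happens.
recovered = (big + small) - big

print(recovered)               # close to 0.1, but several digits are wrong
print(abs(recovered - small))  # the damage is on the order of 1e-9
```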

Number precision is a funny thing; did you know that the infinitely repeating decimal 0.999... is equal to one?

In mathematics, the repeating decimal 0.999... denotes a real number equal to one. In other words: the notations 0.999... and 1 actually represent the same real number.

This equality has long been accepted by professional mathematicians and taught in textbooks. Proofs have been formulated with varying degrees of mathematical rigour, taking into account preferred development of the real numbers, background assumptions, historical context, and target audience.

Computers are awesome, yes, but they aren't infinite... yet. So any prospect of storing an infinitely repeating number on them is dim at best. The best we can do is work with approximations at varying levels of precision that are "good enough", where "good enough" depends on what you're doing, and how you're doing it. And it's complicated to get right.

Which brings me to What Every Computer Scientist Should Know About Floating-Point Arithmetic.

Squeezing infinitely many real numbers into a finite number of bits requires an approximate representation. Although there are infinitely many integers, in most programs the result of integer computations can be stored in 32 bits. In contrast, given any fixed number of bits, most calculations with real numbers will produce quantities that cannot be exactly represented using that many bits. Therefore the result of a floating-point calculation must often be rounded in order to fit back into its finite representation. This rounding error is the characteristic feature of floating-point computation.
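That characteristic rounding error fits in one line of any language with binary floating point. The decimal values 0.1, 0.2, and 0.3 all get rounded to the nearest representable binary fraction, and the errors don't cancel:

```python
# 0.1 and 0.2 are stored as the nearest binary doubles; the sum of
# those approximations is not the double nearest to 0.3.
print(0.1 + 0.2)         # 0.30000000000000004
print(0.1 + 0.2 == 0.3)  # False
```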

What do the Google, Windows, and Excel (pdf) math errors have in common? They're all related to number precision approximation issues. Google doesn't think it's important enough to fix. They're probably right. But some mathematical rounding errors can be a bit more serious.

Interestingly, the launch failure of the Ariane 5 rocket, which exploded 37 seconds after liftoff on June 4, 1996, occurred because of a software error that resulted from converting a 64-bit floating point number to a 16-bit integer. The value of the floating point number happened to be larger than could be represented by a 16-bit integer. The overflow wasn't handled properly, and in response, the computer cleared its memory. The memory dump was interpreted by the rocket as instructions to its rocket nozzles, and an explosion resulted.
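The failure mode is easy to sketch. The `to_int16` helper below is hypothetical, not the actual Ada code, but it mimics what an unchecked conversion from a wide float to a 16-bit signed integer does: values that fit convert cleanly, and values that don't become silent garbage. (In the actual incident the out-of-range conversion reportedly raised an unhandled exception rather than wrapping, but either way the guidance system ends up acting on meaningless data.)

```python
def to_int16(x: float) -> int:
    """Mimic an unchecked narrowing conversion to a 16-bit signed
    integer: truncate toward zero, keep the low 16 bits, reinterpret
    the result as signed (wrapping silently on overflow)."""
    n = int(x) & 0xFFFF
    return n - 0x10000 if n >= 0x8000 else n

print(to_int16(123.4))    # 123: in range, converts as expected
print(to_int16(70000.0))  # 4464: out of range, silently wrapped nonsense
```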

I'm starting to believe that it's not the computers that suck at math, but the people programming those computers. I know I'm living proof of that.

Posted by Jeff Atwood

@the old [o]rang:

A simple way (very simplified) is to say that 0.99999... is so close to 1.0, that I wish not to have to print all them darned nines, and for my purposes I will round it off to 1.0, so I don't have to spend all day writing out senseless nines. The precision of avoiding the incredibly small differences is not worth the effort, since each space is 1/10 the size of the previous.

At the risk of enabling a troll, let me correct you.

0.999... is not *pretty close to* one. It is one. Not by convention, not by habit, not by agreement, not by tradition, not by laziness. The problem here, I think, is that you are ignoring what that ... means. It doesn't mean a whole bunch more, nor an arbitrarily large number of. It means an *infinite* number of. It means they go on forever. Not till you get tired of writing them, or holding down the 9 key. *For*. *Ever*.

What's more, your rambling about positive zero and negative zero, about how computers can only add, the rest is just tricks, and computers can't do anything but binary make me think you're a representative from the Time Cube organization. Look up bignums and BCD and get back to us. (Or don't.)

Atario on May 14, 2009 11:32 AM

In general, to the commenters who are saying that Jeff is speaking to the wrong crowd...

It's good that you had an excellent computer science education. But you seem to be failing to account for the common nature of these issues. Maybe Jeff is saying things that have already been said, but as long as these mistakes keep happening these things bear repeating. At the very top of the article, it's observed that Google has made this common, easily-corrected error (I say easily-corrected because with Guido van Rossum in their organization somewhere, you'd think they could make the search engine's math output at least as good as the default handler in Python). If Google's screwing up something this simple, maybe it's not as obvious to the average programmer as we think it is?

So perhaps the people who write common and widely-used software should improve their computer science education. And perhaps the people who comment on these articles should (http://drupal.org/node/29405) help by contributing to successful projects like Drupal.

Maybe computer science prowess and popular, successful programs are semi-independent variables.

And maybe in a world where they are semi-independent, the people who know what they are doing should get up off the backs of the people who try to educate the rest of us.

Mark Tomczak on May 14, 2009 12:10 PM

The only problem I have with the .99999... = 1 claim is this: give me N nines after the decimal point, and I can give you an infinite number of numbers between that number and 1. Not necessarily on a computer, but in theory.

osp70 on May 14, 2009 12:16 PM

You know the old saying ... garbage in, garbage out

Don't blame the poor unknowing computer. It's just doing what it's told, and following the rules it's given to produce a result.

someone on May 14, 2009 12:19 PM

Trickiest computer math gotcha I stumbled upon in reality: modulo repetition. Given a float x, think there's no difference between x%1 and x%1%1 (% being modulo operator)? Think again:

Python 2.6.2 (release26-maint, Apr 19 2009, 01:58:18)
[GCC 4.3.3] on linux2
>>> x = -1e-20
>>> x % 1
1.0
>>> x % 1 % 1
0.0

aurelix on May 14, 2009 12:19 PM

Jolly good. If you see somebody using floating point for currency representation, they surely don't know what they're doing. Beware bad text-books.

Decimal numbers are coming back to computers. There is a modern IEEE-standard for them, and hardware support is said to be coming for this standard. Even the software versions are actually quite fast, but most importantly, they're correct. There are implementations for C and C++ as well.

Mike Cowlishaw made the programming language REXX, which actually uses decimal numbers, and he's done some important work on decimal numbers and the IEEE standard. He's also the man behind JSR 13 for BigDecimal in Java. Densely Packed Decimal is a variant of Chen-Ho encoding for decimal numbers.

Some links on decimal arithmetic on computers for those who are interested:

Simply put: There is not much excuse for not doing math correctly on computers. I can understand why floating point arithmetic is wanted by Fortran/HPC/Science programmers, who at least for some systems need as much speed as they can get their hands on, but for anything else, decimal arithmetic is the way to go.

Anyway, it's bliss to use a language which just does it correctly.

Dennis Decker Jensen on May 14, 2009 12:23 PM
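Dennis's point is easy to check in Python, whose standard library has shipped an arbitrary-precision decimal type (following the decimal arithmetic specification Cowlishaw championed) since version 2.4:

```python
from decimal import Decimal

# Binary floats accumulate representation error...
print(0.1 + 0.1 + 0.1 == 0.3)                # False

# ...decimal arithmetic does not, because 0.1 is exact in base 10.
print(Decimal('0.1') * 3 == Decimal('0.3'))  # True
print(Decimal('12.52') - Decimal('12.51'))   # 0.01, exactly
```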

@zokier

So if Java is like a house, then C++ is like a house without a kitchen sink?

Sounds about right to me.. ;)

jasonmray on May 14, 2009 12:24 PM

So...you've found a problem. What's the solution?

Practicality on May 14, 2009 12:45 PM

In mathematics, the repeating decimal 0.999... denotes a real number equal to one. In other words: the notations 0.999... and 1 actually represent the same real number.
This equality has long been accepted by professional mathematicians and taught in textbooks.

Really? Glad to know it gets taught; nobody taught that to *ME*. I swear, just a couple of months ago I was randomly thinking about periodic fractions and non-decimal bases (nothing better to think about... maybe that's why it took me so long to get married :-) ), and I stumbled upon this fact in total bewilderment. It was a clear, unmistakable and inescapable consequence of a few basic mathematical facts. How fun!

To me, the most interesting consequence is that our conventional numeric notation system (even with the use of ... or the vinculum sign) is not a bijective representation of the set of Reals, even though for the longest time I had assumed it was.

Euro Micelli on May 14, 2009 12:47 PM

@Craig Fritzpatrick

"So when we divide 9 by 2 for example, we as people might write: 4 1/2. Simple, a string of 5 characters including the space. No precision problems."

Good luck when you want to calculate sqrt(2). Seriously, there are many rational classes, but I doubt any store them as strings.

@Jim

"It doesn't help that the real numbers are uncountably infinite, not merely countably infinite like the integers." ... "Any two different real numbers have an infinite number of other reals between them, so it's impossible to represent any nonempty segment of the real number line exactly."

That's not just a problem with uncountably infinite sets; the rationals are countable, but any two rational numbers have an infinite number of rationals between them. But this isn't really the problem with representing numbers on computers -- integers have just as much of a problem if they are sufficiently large.

Steve W on May 14, 2009 12:49 PM

Not related to floating point numbers: Stack Overflow has spoiled me. Everything on the internet needs to be able to be up-voted or down-voted. I read this article in Google Reader and for a split second was looking for the up-arrow to click on. Sharing or starring an article just doesn't feel the same as up-voting.

Scottie T on May 14, 2009 12:58 PM

@Daren:
The correct mathematical explanation is that 0.99999... (zero point nine recurring) approaches 1.

No, actually it has nothing to do with Limits. I know it looks like it, but it's not.

Here's a simplified version of what I stumbled upon:

1/3 = 0.33333...
2/3 = 0.66666...

Nothing weird there. Those two are clearly exactly equivalent. Now:

1/3 + 2/3 = 1

No possible doubt there. There are no limits or rounding involved. Therefore,

0.33333... + 0.66666... = 0.99999...

Which means that 0.99999... MUST therefore be an alternative (non-normalized) representation of 1.

Euro Micelli on May 14, 2009 1:00 PM
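Euro's thirds argument can also be checked mechanically: exact rational arithmetic (Python's `fractions` module, here) stores thirds as numerator/denominator pairs, sidestepping decimal expansions entirely, so there's no rounding to argue about.

```python
from fractions import Fraction

third = Fraction(1, 3)         # stored exactly as the pair (1, 3)

print(third + 2 * third)       # 1
print(third + 2 * third == 1)  # True: no limits, no rounding involved
```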

Ruby and Python rule too :) Just like Python:

~$ irb
irb(main):001:0> 399999999999999-399999999999998
=> 1
irb(main):002:0> exit
~$ php -r 'echo 399999999999999-399999999999998 . "\n";'
1
~$

Sergei Kuznecov on May 14, 2009 1:09 PM

I got the correct answer from Excel 2007 when I tried =850*77.1

Norbert on May 14, 2009 1:09 PM

The identity, 1 = 0.999..., simply indicates that we can represent the number that we call 'one' as two different power series, 0 + 9/10 + 9/100 + 9/1000 + ... and 1 + 0/10 + 0/100 + 0/1000 + ... (This is simply the definition of our decimal expansion notation; if the trailing coefficients of the series are repeating zeros, we omit writing them for convenience.) Put another way, the equation, 1 = 0.999..., just says that two different sequences (the partial sums of the two power series above) converge to one.

Andrew on May 14, 2009 1:11 PM

Python rulez?

>>> 38.1 * .198
7.5438000000000009
>>> .1
0.10000000000000001

http://docs.python.org/tutorial/floatingpoint.html

I had the same problem on an e-commerce website in C#, and in Perl/PHP/Tcl/C, and as everybody knows, computers are not faulty, only developers :) Normally, to avoid problems, it is better not to work in BCD (binary coded decimal) float (unless your CPU natively computes in decimal :) ), but rather to use the fixed-point trick. In real life it means manipulating only integers (you don't store a price as a float, but as an integer representing tenths of cents, for instance) and the decimal point is just a matter of presentation. It hardly works all the time, but it is better than nothing.

jul on May 14, 2009 1:13 PM

Computers don't suck at math. People simply use floating point variables for purposes that floating point wasn't designed for. The technology to do true decimal arithmetic and return precise numbers has existed for more than 50 years. On business platforms, such as the IBM i platform, it's the default numeric data type. It's the right choice when you are working with money, weights, quantities and the other precise numbers used in business. Floating point was designed more for scientific or graphical applications. It wasn't designed for business.
Strangely, most languages for the PC platform are lacking true decimal arithmetic, and the developers aren't clamoring for it. I've never understood that. How can floating point be good enough for your business?

darkbagel on May 14, 2009 1:17 PM

The Excel bug is very confusing... if you take that 850 * 77.1 and format the cell as a date, you get the same value as if you had formatted 65535 as a date... if you format it pretty much any other way, you get the 100000 and 65535.

ChrisHDog on May 14, 2009 1:20 PM

But subtraction can be anywhere from exact to completely inaccurate. If two numbers agree to n figures, you can lose up to n figures of precision in their subtraction.

This makes no sense to me, and neither this article nor the linked article attempts to explain it. Why should subtraction be harder than addition, multiplication, or division? I've tried thinking about it from various angles, and don't see why subtraction should introduce this kind of difficulty, and especially why the agreement of the two operands should have an effect on the precision of the results. Can you elaborate?

Joe on May 14, 2009 1:41 PM

The difference is: if a computer does a wrong calculation, it is caused by a bug in either hardware or software. If a human calculates wrong, it is very likely caused by a mistake, lack of talent or too-complicated math. Unless of course you define humans as bugs.

Reminds me of how brilliant I thought I was when javascript:parseInt('010') returned 8 instead of 10. I have found a bug in JavaScript, I thought. Only to find out it was because of the octal conversion. :)

peter palludan on May 14, 2009 1:44 PM

@jul
I think you are confusing binary coded decimals (BCD) and binary floating point representation (IEEE 754): BCD is (was?) a technique often used in assembly code to store large numbers and perform precise arithmetic on them.

Anders Sandvig on May 14, 2009 1:47 PM

I really hate it when some data type isn't good or large enough.
When I do calculations, I am not interested in the datatype one bit. If the value is big, do you think I care? The computer should enlarge the variable to be able to hold the big value -- automatically. Automation is the name of the game anyway. Or do you think it is reasonable that when a big value occurs, the computer whispers to me:

Hey, psst... hey, programmer.
What?
This is really embarrassing, but could you kindly enlarge my variable?
Oh for God's sake. Argh, alright then, but this will be the last time!
Yes, goody goody goody! Thanks!

Silvercode on May 14, 2009 1:56 PM

These responses scare me very much. How could this article be written without a discussion of machine epsilon?

Steve on May 15, 2009 2:22 AM

The desktop calculator designed by Dr. Larry Nylund is a complete replacement for the traditional calculator that comes with Microsoft Windows(c). It solves the issue of the floating-point problem. Download your own desktop calculator today, at almost no cost! http://www.math-solutions.org

Lillian Travaglini on May 15, 2009 2:34 AM

1/3 = 0.333333333....
Multiply both sides by 3:
1 = 0.9999999.....
Don't be a d-bag.

jerk on May 15, 2009 2:38 AM

To be clear, for all of those talking about languages that will handle this, and how PHP, Lisp, Ruby, Python, etc. are all superior because they handle it: this isn't special. I'm not aware of any programming languages which would be unable to handle it. The integer in question requires 49 bits; what programming language still in use doesn't have 64-bit integers?... The problem only exists because Google is using single-precision floats for this, presumably because they are able to get things done faster that way on average for a typical query. If your language can pass this test while using single-precision floats, THAT would be impressive... But it can't.
I highly doubt Google even sees this as a bug of any sort, because most users aren't exactly using the Google calculator in ways that would make this any more than a novelty. The fact that everybody looks at 399999999999999-399999999999998 rather than 30347423581692-303474235816991 or something from a real-world example that went bad is a testament to this.

Tiak on May 15, 2009 2:42 AM

[0.(9) equals 1] is false. [0.(9) does not equal 1] is also false. [0.(9) is probably equal to 1] is true. Probability is used to solve unsolvable problems. For 99.(9)% of our needs, [0.(9) equals 1] is true. This includes engineering and applied mathematics. And it's for the simple fact that you have to decide on a level of precision (number of decimal places) or wait for eternity as the 9s roll out, and thus never get anything done. In theoretical physics, [0.(9) does not equal 1] is true. Think big questions like the size of our finite universe and what's on the other side if it is finite. In this context, 0.(9) does not equal 1. It equals 42. :P

Shane on May 15, 2009 3:15 AM

shane is being a d-bag

jerk on May 15, 2009 3:19 AM

You should know that a) programming language A is slower/faster than B, b) programming language A is better/worse than B, and c) the 0.9-periodic vs. 1 discussion (added just now) are taboo subjects. It's too late now. This will go on and on forever.

Andrés Panitsch on May 15, 2009 4:29 AM

You people need to learn numerical methods before commenting on this. Numbers to a computer don't exist on a continuous line; everything is discrete, and not linearly spaced on that discrete number line. This sort of floating point error is a common occurrence in poorly written code.

chris on May 15, 2009 4:40 AM

Wow, this is pretty pointless. Who gives a damn.

Brian on May 15, 2009 4:41 AM

And furthermore, I have tested and retested this theory with the calc in XP, and I cannot recreate a math error with any calculation, period, no matter what level of precision or how many digits.
Brian on May 15, 2009 4:43 AM

@Lillian, Adamsson: From the site:

Desktop calculator handles resulting floating-point values between 2.225E-308 (2^-1022) and 1.797E+308 (2^1024).

It IS floating-point. So stop spamming.

@Brian: http://www.codinghorror.com/blog/archives/001208.html -- XP now uses internal decimal math.

TSK on May 15, 2009 6:51 AM

No, Wolfram|Alpha handles it properly. I guess it depends on the nature of the computer.

Juan Perez on May 15, 2009 6:52 AM

There is a problem with Daren's explanation of 0.9999... = 1, because:

(10*0.999...) - 0.999... = (10-1)*0.999... = 9*0.999...

and

(10*0.999...) - 0.999... = 9.999... - 0.999... = 9

But,

(10*0.999...(N times)) - 0.999...(N times) = 9.999...((N-1) times) - 0.999...(N times) = 9.000...((N-1) times)1 != 9

Thus 9 = 9*0.999..., and the axioms of arithmetic say that if x*y = x, then y = 1 (1 is the unique neutral element for the * operation).

confused on May 15, 2009 7:34 AM

Based on the definition of decimal representation of real numbers, it's clear that 0.999... = 1. That's not what I have a problem with. Many posters, in making this claim, have appealed to the obvious notion that 0.333... = 1/3. My question is, if you don't accept that 0.999... = 1, then on what basis would you accept that 0.333... = 1/3?

mkorman on May 15, 2009 8:15 AM

Wolfram|Alpha uses at max only 66 decimal digits :(

Tadeu on May 15, 2009 8:21 AM

if you don't accept that 0.999... = 1, then on what basis would you accept that 0.333... = 1/3?

I don't know why they don't accept 0.999... = 1, but there is a more intuitive reason why 1/3 = 0.333...: just try dividing 1 by 3 using long division.

Steve W on May 15, 2009 8:49 AM

Just some thoughts as a mathematician. Sorry about my English.
There's a huge difference between INFINITE precision and ARBITRARY precision. An arbitrary-precision system is able to represent numbers at any FINITE precision, but not at INFINITE precision -- it would not be able to handle all real numbers even with infinite memory. Actually, it could handle hardly any real numbers. There are just too many of them. http://en.wikipedia.org/wiki/Cardinality

About the 0.999... = 1 confusion: just forget all the proofs and equations. Just check the definitions of decimal notation and real numbers.

lisaaKaljaa on May 15, 2009 8:52 AM

I'd say that 0.333... does not equal 1/3. It's an approximation.

JC on May 15, 2009 8:58 AM

I'd say that 0.333... does not equal 1/3. It's an approximation.

Well, it's not like it's an opinion. Either it does or it doesn't. And, it does. Again, by the *definition* of decimal representation. Look up this definition if you don't believe me.

I don't know why they don't accept 0.999... = 1, but there is a more intuitive reason why 1/3 = 0.333...; just try dividing 1 by 3 using long division.

But still, it's a leap to get from the finite iterative process of long division to an infinitely long number. I'd say that most people who are hazy on any of these facts simply don't understand what decimal notation even means.

mkorman on May 15, 2009 9:06 AM

I'd say that 0.333... does not equal 1/3. It's an approximation.

So what is the margin of error?

Steve W on May 15, 2009 9:07 AM

Wow, that is like WAY cool! RT www.whos-watching.net.tc

John Davis on May 15, 2009 9:09 AM

But still, it's a leap to get from the finite iterative process of long division to an infinitely long number.

But I think that if you work through the long division process, it should be obvious that you are going to be dividing 10 by 3 with remainder 1 forever. I mean, if it's a finite process, you would have to assume something is going to change to put an end to it.
Steve W on May 15, 2009 9:10 AM

But I think that if you work through the long division process, it should be obvious that you are going to be dividing 10 by 3 with remainder 1 forever.

You're right, but once people are able to accept the idea that a decimal number is actually the limit of some sequence (whether or not the sequence contains the limit), then you've already won them. Most people seem to have trouble getting that far.

mkorman on May 15, 2009 9:13 AM

You simply need computers to represent 0.999... (recurring) differently than 0.999999999999999999999999 (full stop). That could be done via metadata, for example, or an extended bit field -- a digital equivalent of having a bar over the digit in human-readable terms. Of course, once you start down this road, you'll end up with an operating system, or at least a libm, that resembles Mathematica more than anything else (which may or may not be a good thing depending on your sensibilities).

Jean-Michel Smith on May 15, 2009 9:15 AM

Private Sub Button3_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button3.Click
    Dim o1 As Double = 399999999999999
    Dim o2 As Double = 399999999999998
    MsgBox(String.Format("{0}-{1}={2}", o1, o2, o1 - o2))
    '' 399999999999999-399999999999998=1
End Sub

VB.NET wins.

anders on May 15, 2009 9:15 AM

RPGLE on the IBM AS/400 says it equals 1... Seems like Windows and Microsoft don't think anyone will calculate numbers that large... hmmmm

d Result          s             16  0
/free
  Result = 399999999999999 - 399999999999998;
  dsply result;
  *inlr = *on;
  return;

Jeff on May 15, 2009 9:17 AM

No, the gravitational constant is known to much more than 4 significant figures -- 14, I believe.

Ephilei on May 15, 2009 9:22 AM

Listen. Anyone who says that .(9) is not 1 is mistaken or ignorant. There is no mathematical question. .(9) doesn't approach 1. Numbers don't approach. Variables do. Take some rudimentary calculus. .(9) IS 1. They are IDENTICAL in the real numbers. Take it from a mathematician.
Daniel Kaplun on May 15, 2009 9:25 AM

This is a perfect example of where discrete representations fail where true analog would have no problem. Just a thought :).

Practicality on May 15, 2009 9:28 AM

This really pisses me off. Why do you put your faith in the WRONG answer (yes, it is WRONG) when you have no logic to back it up? I feel that we should teach middle-school teachers that .(9) = 1 and explain why, so that an 8th grader would understand. Then there wouldn't be this retarded farce.

Daniel Kaplun on May 15, 2009 9:29 AM

http://mathforum.org/dr.math/faq/faq.0.9999.html

Daniel Kaplun on May 15, 2009 9:31 AM

My XP machine running Office 2007 got the calculator and Excel problems right. I don't know what you're doing wrong, buddy.

Matt on May 15, 2009 9:43 AM

You mean you didn't know IEEE representations????

DawnOfWar on May 15, 2009 9:44 AM

It's not a math problem, or even a generic computer issue. It is a developer issue. The floating point format was developed for compact storage and fast processing on x87-type coprocessors when power was scarce and memory was too. The only legitimate way to perform any kind of business computation these days is to get rid of that encoding, realize that computers are a few million times more powerful now than they were then, and use ways to pack data that do not produce computational artifacts. Like the BCD encoding, for example. Which existed way before the x87 coprocessors.

Louis-Eric on May 15, 2009 9:47 AM

0.9 (periodic) is equal to 1. That is mathematics and has nothing to do with computers. Another representation is 9/9, and the difference between 0.999999999999999999999999999...9 and 1 is infinitesimally low. So both numbers are the same by math, not convention (had a discussion with a doctor in mathematical physics a while ago about this topic ;-) )

DawnOfWar on May 15, 2009 9:49 AM

wow. i didn't know computers could make mistakes.

Steph on May 15, 2009 9:55 AM

Type bc -l in a shell...
Problem solved ;-)

Alex on May 15, 2009 9:57 AM

@Steve W
When I went through university for comp sci (less than a decade ago), I took an entire semester-long course on dealing with computer representations of real numbers and the consequences of floating point arithmetic. Take heart!

Cole on May 15, 2009 10:00 AM

0.9 periodic plus an infinitely small fraction equals one. Only if you assume infinitely small to be zero can you say both numbers are equal. It's a religious thing.

Amorpheus on May 15, 2009 10:00 AM

Works correctly on kcalc under Kubuntu 8.10.

Mike McGinn on May 15, 2009 10:12 AM

Hey, does anyone know how to do quad-precision integers in C or x86 assembler? I spent a few weeks trying to do it and failed... P.S. 128-bit numbers.

Steven Wagner on May 15, 2009 10:14 AM

With the sheer number of posts on this article, there sure are a lot of math nerds out there! I'm not complaining, just pointing it out!!

Nick on May 15, 2009 10:17 AM

I talked to a triple-PhD in math who also worked at BK flipping burgers to cover a divorce... a number is a number is a number... they don't mean nothing till you say they do. If all you need is x significant digits... then cut it. Lame arguments for no reason. Computers do what you ask 'em to do. Ask them to do things correctly. MoveOn.org. And anyone who does something based on time and doesn't take pains to keep things synched... idiota. Hope one of them missiles finds THEM.

ShamWow on May 15, 2009 10:20 AM

ST85 on May 15, 2009 10:21 AM

Computers don't make mistakes..... They are perfectly logical and do exactly what they are told. Problem is, they are sometimes told the wrong thing!

SJenkins on May 15, 2009 10:30 AM

I hereby comment on this particular post by [author]. There are two possibilities:

1. [author] is right. Any person that has previously pointed out the truth of the post has done so without necessity, as [author] stated an obvious fact and thus only wasted my time.
Any person planning to point out the truth of the post after I did so should instead agree with me, giving proper credit to my intellect. Any person that has previously claimed that [author] was wrong should consider themselves responded to with alternative 2.

2. [author] is wrong. Any person that has previously pointed out the falsity of the post has done so without necessity, as the author is obviously incompetent (concerning this particular matter, and thus concerning any topic whatsoever) and only wasted my time. Any person planning to point out the falsity of the post after I did so should instead agree with me, giving proper credit to my intellect. Any person that has previously claimed that [author] was right should consider themselves responded to with this very paragraph.

I have made my opinion clear, and I have an anecdote that backs up my statement without the slightest possibility of a doubt. Go home now, there is nothing more to see here.

Ben on May 15, 2009 10:35 AM

@Dennis
From my understanding, 0.9999..... does not equal one, but it is generally accepted and taken as correct to equate its value to 1, given that the difference 0.99999... - 1 is in fact so small that it becomes inconsequential.

Alvaro Fernandez on May 15, 2009 10:52 AM

It's all about the 16-bit and 32-bit thing... the calculator you used is from Windows 3.11 or earlier... use the calculators of Windows XP/Vista and see...

Dhaval Faria on May 15, 2009 11:05 AM

For the Excel dudes: SP1 fixed the issue.

petko on May 15, 2009 11:10 AM

Your Excel 2007 example is wrong. It gives the right answer. That's cos MS developers are much better than Google's smart arses.

Michelle Rodriguez on May 15, 2009 11:20 AM

Why do cars suck at driving? The driver must know the car's limitations, otherwise driving is not safe.

lanG on May 15, 2009 11:26 AM

Your Excel is crap.

Michelle Rodriguez on May 15, 2009 11:44 AM

hahahaha, nice topic...
.well, they dont suck, Microsoft does.;) evan Varsamis on May 15, 2009 12:05 PM I thought that I am bad at math antifreeze on May 15, 2009 12:05 PM Just found out that 5.3 - 5.0 = 0.2999999999999998 for both Firefox and Safari's JavaScript engines. martoche on May 15, 2009 12:05 PM We know that the maximum of \sum_{n=1}^\inf x/10^n , x \in {1,2,3,4,5,6,7,8,9} is equal to 9* \sum_{n=1}^\inf 1/10^n. If we 'cheat' and use the definition of a convergent geometric series, that is for r 1, r^n converges to (1/(1-r)), 10^-n is equivalent to 1/9. Then, 9*(1/9) is equal to 1. Rob on May 15, 2009 12:59 PM Actually, it was Ariane-4.999... Shut the Float Up on May 15, 2009 1:00 PM where can I download the VM with Windows 3.0? hahahahaha securityhorror on May 15, 2009 1:23 PM I think what Darren meant was: starting with: x = .99999... 10x = 9.99999... (multiply each side by 10) 10x - x = 9.99999... - .99999... (subtract x or .99999... from each side) 9x = 9 (simplify each side's subtraction) x = 1 (divide by 9) Q.E.D. Wayne Goode on May 15, 2009 1:26 PM Here's how I've always thought of the .(9) = 1.0 argument. Basically there is no room between 1 and .(9). People say that you could fit .(0)1 in there, but you can't. Heh, right, I forgot many people claim that you could fit 0.0...01 in there. I think that understanding why 0.0...01 is nonsensical notation is equivalent understanding why 0.999... = 1. mkorman on May 15, 2009 1:26 PM actually you re saying it works on live search, but its capacity isnt infinite either, since : http://search.live.com/results.aspx?q=39999999999999999999+-+39999999999999999998go=form=QBREfilt=all this error is not due to the precise value of 399 999 999 999 999, it's just because it's a big number, nothing exceptional. btw nice proof Rob !! 
LoÔc on May 15, 2009 1:59 PM Floating-point problem has already been fixed by Dr Larry Nylund at the Institute of Mathematics and Statistics; http://www.math-solutions.org His intelligent implementations of desktop PC calculators have solve the floating-point problem. Dr. Larry Nylund's solutions are pretty good to solve math stuffs. These are perfect mathematical solutions for school, high school, university and engineers. Hans Adamsson on May 15, 2009 1:59 PM Who is the audience for this? You're always making these bad arguments and missing fundamental concepts. I recommend taking some courses in computer science; this is actually discussed in excruciating detail in any reputable introductory CS course. Charles on May 16, 2009 2:07 AM so funny! :) ABCoder on May 16, 2009 3:12 AM Spotlight calculates 399999999999999 - 399999999999998 just right :) I'm a Mac :P will on May 16, 2009 4:17 AM It's not the computer that has trouble - it's the programmer or the language. Try using scheme, which often implements precise arithmetic. Ted on May 16, 2009 7:26 AM I've accepted that 0.999... = 1 but I'm wondering what number would be just less than one to the infinitely-small number place? ignorant 8th grader on May 16, 2009 7:44 AM I'm using Ubuntu. I get a perfectly correct answer :) Jack on May 16, 2009 8:16 AM You could have at least used an example where people died. I.E. the Patriot missle in the Gulf War. TraumaPony on May 16, 2009 8:26 AM$ python
Python 2.5.2 (r252:60911, Jan 4 2009, 17:40:26)
[GCC 4.3.2] on linux2
>>> from decimal import Decimal
>>> print Decimal('399999999999999') - 399999999999998L
1
>>> print 399999999999999L - 399999999999998L
1
>>> Decimal('38.1') * Decimal('0.198')
Decimal("7.5438")
>>> Decimal('0.1')
Decimal("0.1")

JMW on May 16, 2009 8:49 AM
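JMW's session above uses Python 2 syntax. A minimal sketch of the same comparison in modern Python 3, using only the standard library, shows where binary floats drift and where exact types don't (it also reproduces the 5.3 - 5.0 effect martoche reported in the JavaScript engines):

```python
from decimal import Decimal

# Binary floats cannot represent 5.3 or 0.1 exactly, so tiny errors leak out:
print(5.3 - 5.0)          # not exactly 0.3
print(0.1 + 0.2 == 0.3)   # False

# Decimal, constructed from strings, does the base-10 arithmetic people expect:
print(Decimal('5.3') - Decimal('5.0'))     # 0.3
print(Decimal('38.1') * Decimal('0.198'))  # 7.5438

# Python integers have arbitrary precision, so big-number subtraction is exact:
print(399999999999999999999 - 399999999999999999998)  # 1
```

Note that `Decimal` must be built from a string: `Decimal(0.1)` would faithfully preserve the binary float's error rather than the decimal value you typed.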

Goodness, you can prove 0.(9) = 1 using just the GCSE maths for finding what fraction a recurring decimal represents. Basically, multiply 0.9999... by ten to the length of the recurring period (i.e. 10) to get 9.9999..., then subtract the original number from it. You obviously get 9, since multiplying 0.9999... by 10 doesn't alter the digits after the decimal point. Because 9.9999... is 10 times the number, 9 = (10 - 1) times the number. So 0.9999... = 9/9 = 1. Worked this out without even looking at the Wikipedia page.

JH on May 16, 2009 9:56 AM
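JH's subtract-and-shift argument, written out step by step:

```latex
\begin{aligned}
x       &= 0.9999\ldots \\
10x     &= 9.9999\ldots \\
10x - x &= 9.9999\ldots - 0.9999\ldots = 9 \\
9x      &= 9 \\
x       &= 1
\end{aligned}
```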

I see... any program just has some logic error or a bug.

http://it.alonearea.com

AloneArea.com on May 16, 2009 10:00 AM

@Ben
I hereby comment on this particular post by [author]. There are two possibilities:

1. [author] is right. Any person that has previously pointed out the truth of the post has done so without necessity, as [author] stated an obvious fact and thus only wasted my time. Any person planning to point out the truth of the post after I did so should instead agree with me, giving proper credit to my intellect. Any person that has previously claimed that [author] was wrong should consider themselves responded to with alternative 2.

2. [author] is wrong. Any person that has previously pointed out the falsity of the post has done so without necessity, as the author is obviously incompetent (concerning this particular matter, and thus concerning any topic whatsoever) and only wasted my time. Any person planning to point out the falsity of the post after I did so should instead agree with me, giving proper credit to my intellect. Any person that has previously claimed that [author] was right should consider themselves responded to with this very paragraph.

I have made my opinion clear, and I have an anecdote that backs up my statement without the slightest possibility of a doubt. Go home now, there is nothing more to see here.

So... in other words... as much as you want us to agree with you and go home... in both alternatives YOU wasted time. Well... I'm willing to agree with you on THAT ahahaha

@everyone
I am amazed at how this thing got blown out of proportion, but 0.9999... is = 1, and to answer Daniel Kaplun and his frustration: I assure you, where I come from they teach this fact in ELEMENTARY SCHOOL.

There is an important difference to be made here. I agree that in virtually every programming course I took this issue is discussed, in some texts more than others... but like many other semi-theoretical disciplines (like sometimes physics, and especially math), in software engineering this little detail sometimes gets forgotten for OUR ease and peace of mind, and such errors ARE present in many software programs considered otherwise bulletproof, and they often go undetected.
The very idea of relativity was conceived by a man that we all know and revere... and we hold this principle to be true... and so far we have found it is so. However, few people realize that before the important discovery of time-warp was made by Einstein, HE TOO had to get rid of things like air friction and aerodynamic forces to deduce the time-warp effect in his mental experiment of free-falling objects, which eventually led to the significant change in how we view the world of nature.

Every day in any school at any level you will see problems, even test questions, where they will say Omit the wind chill factor... or Disregard air friction...
It is in our very nature to try to make problems simple so that we can solve them. And this mindset has brought us great strides in understanding, technology, etc. I.e., it does not ALWAYS have to be perfect.

With that said, I recognize that this can be a problem when precise calculations are critical, and it should be, and HAS been, addressed in those particular and special fields. But I think us common mortals are going to be fine with the variable sets we have.

And to those who criticized this author for pointing out this problem, deeming it a waste of a blog post: well, I urge you to consider that this is not paper and no trees were harmed in the making of this post, and stating the obvious has proven IMMENSELY useful in our scientific history.
Think about the origin of the word Eureka, spoken by Archimedes after witnessing a simple everyday phenomenon (the rising of the water level when a body is immersed in it).

And remember the age-old programmer's axiom, the KISS concept... so please... let's!

_@EricDraven on May 16, 2009 10:48 AM

I'm sure someone else picked up on this: "So any prospects of storing any infinitely repeating number on them are dim at best."

Wrong! You forget symbolic representation... I can store pi to perfect precision in a single byte! :)

Seriously though, it's good to be reminded that floating point numbers have all kinds of weirdness. The rounding errors are often missed by newbies, along with other things like how NaN and +/-INF react when put through conditionals. :)

Jheriko on May 16, 2009 1:40 PM
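Jheriko's symbolic-representation point can be made concrete with Python's standard fractions module (a sketch, not tied to any commenter's actual code): a repeating decimal never has to be stored digit by digit if you keep it as a ratio of two integers.

```python
from fractions import Fraction

# 1/3 repeats forever as a decimal, but as a ratio it is just two small integers.
third = Fraction(1, 3)
print(third)           # 1/3
print(third * 3 == 1)  # True -- no rounding error anywhere

# 0.999... is the geometric series 9/10 + 9/100 + ...; summing it exactly
# shows the gap below 1 shrinking toward zero:
partial = sum(Fraction(9, 10**n) for n in range(1, 20))
print(1 - partial)     # 1/10000000000000000000
```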

Sorry for the double comment, but I feel obliged to point out that all of the examples you give are bad programming and not the computer's fault. :)

The only thing I can think of along these lines is the old Intel FPU bug with FDIV (long since fixed).

Jheriko on May 16, 2009 1:46 PM