
0.999... = 1

218 points | yurisagalov | 5 years ago | en.wikipedia.org

626 comments

[+] undecisive|5 years ago|reply
There is no proof that will ever satisfy a person dead-set against this. Ever since I brought this home from school as a child, my whole family ribbed me mercilessly for it.

If you tell a person that 3/6 = 1/2, they'll believe you - because they have been taught from an early age that fractions can have multiple "representations" for the same underlying amount.

People mistakenly believe that decimal numbers don't have multiple representations - which, in a way, is correct for plain finite decimals. The bar or dot or "..." are there to plug a gap, allowing more values to be represented exactly than plain-old finite decimals allow for. That has the side effect of introducing multiple representations - and even then it doesn't cover everything: pi, for example, still can't be written down exactly.

But it also exposes a limitation in humans: We cannot imagine infinity. Some of us can abstract it away in useful ways, but for the rest of the world everything has an end.

I wonder if there's anything I can do with my children to prevent them from being bound by this mental limitation?

[+] knzhou|5 years ago|reply
Personally I've always thought "proofs" using "arithmetic" are right, but kind of stated backwards.

The point is that in elementary school arithmetic, you define addition, multiplication, subtraction, division, decimals, and equality, but you never define "...". Until you've defined "...", it's just a meaningless sequence of marks on paper. You can't prove anything about it using arithmetic, or otherwise.

What the "arithmetic proofs" are really showing is that if we want "..." to have certain extremely reasonable properties, then we must choose to define it in such a way that 0.999... = 1. Other definitions would be possible (for example, a stupid definition would be 0.999... = 42), just not useful.

What probably causes the flame wars over "..." is that most people never see how "..." is defined (which properly would require constructing the reals). They only see these indirect arguments about how "..." should be defined, which look unsatisfying. Or they grow so accustomed to writing down "..." in school that they think they already know how it's defined, when it never has been!

[+] baryphonic|5 years ago|reply
The way '...' is used here is perfectly consistent with being defined as a geometric series where the ratio between successive elements is 1/10 and the start term is the final digit. Geometric series always converge when the absolute value of the ratio is less than 1.

I should note that when I learned about rational & irrational numbers in elementary school (I think third or fourth grade), we used a "bar" notation where we'd put a bar over the last digits in a decimal expression that repeated forever (i.e. it corresponded exactly to a geometric series with r = (1 / 10)^k, where k is the number of digits under the bar, though we didn't know that at the time). Our teachers explained that the difference between a rational and an irrational number was that there would be no pattern you could ever find in an irrational number that would allow you to use the bar, which is surprisingly accurate for grade-school arithmetic.
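
To make that concrete, here's a small Python sketch (my own illustration, not anything from the grade-school curriculum) of the geometric-series reading, using exact fractions so floating-point rounding doesn't get in the way:

  # 0.999... read as the geometric series 9/10 + 9/100 + 9/1000 + ...
  # The partial sums climb toward 1, and the closed form a/(1 - r) is exactly 1.
  from fractions import Fraction

  a, r = Fraction(9, 10), Fraction(1, 10)
  partial = Fraction(0)
  for n in range(1, 7):
      partial += a * r**(n - 1)
      print(n, partial, float(1 - partial))
  print("closed form a/(1 - r):", a / (1 - r))   # prints 1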

[+] CogitoCogito|5 years ago|reply
> Personally I've always thought "proofs" using "arithmetic" are right, but kind of stated backwards.

I've never considered them right at all. By saying something like

0.9... x 10 = 9.9...

and then saying that

9.9... - 0.9... = 9

you're basically defining 0.9... to be 1 a priori. In other words, you're defining 0.9... as a symbol for some number x with the property that 10x - x = 9 - which is just defining it to be 1.

I've never seen a proof of 0.9... = 1 using Peano arithmetic that made any sense to me, and I doubt one exists in any rigorous sense. Unless you're making use of limits, completeness, or something equivalent, I don't see how a proof could possibly make sense.

[+] agumonkey|5 years ago|reply
And here I thought mathematicians would reject applying ordinary arithmetic operations over a domain of `N...` elements. They're always so ultra-rigorous about classifying what is or is not defined, and over what domain... but then they let an infinite sequence be treated like any finite number.
[+] ssivark|5 years ago|reply
> The point is that in elementary school arithmetic, you define addition, multiplication, subtraction, division, decimals, and equality, but you never define "...". Until you've defined "...", it's just a meaningless sequence of marks on paper. You can't prove anything about it using arithmetic, or otherwise.

Sure, but the point of "elementary school" arithmetic is not "elementary" arithmetic, as a mathematician would define it :-)

The goal is to teach people to reason by matching patterns. Deductive/Inductive reasoning can slowly proceed from that, as they try to frame their intuition for patterns into increasingly more general abstractions.

[+] tomtomtom1|5 years ago|reply
Suppose we say that infinitesimals exist: that 1/3 != 0.33..., that 1 != 0.9999..., and that the probability of possible events is never 0.

what are the properties that we would lose?

[+] dwheeler|5 years ago|reply
A formally rigorous proof of this (in Metamath) is here:

http://us.metamath.org/mpeuni/0.999....html

Unlike typical math proofs, which only hint at the underlying steps, every step in this proof uses precisely an axiom or a previously proven theorem, and you can click on the step to see it. The same is true for all the other theorems. In the end it only depends on predicate logic and ZFC set theory. All the proofs have been verified by 5 different verifiers, written by 5 different people in 5 different programming languages.

You can't make people believe, but you can provide very strong evidence.

[+] jl2718|5 years ago|reply
The proof relies on the assertion that the supremum of an increasing sequence is equal to the limit. This is mathematical dogma, and should be introduced as such. Once that is accepted, it becomes obvious.

This is illustrative of what I see as a fundamental problem in mathematics education: nobody ever teaches the rules. In this case, the rules of simple arithmetic hit a dead end for mathematicians, so they invented a new rule that allowed them to go further without breaking any old rules. This is generally acceptable in proofs, although it can have significant implications, such as two mutually exclusive but otherwise acceptable rules causing a divergence in fields of study.

When I was taught this, it was like, “Look how smart I am for applying this obtusely-stated limit rule that you were never told about.” This is how you keep people out of math. The point of teaching it is to make it easy, not hard.
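
To illustrate the rule being invoked (my own sketch, not jl2718's): the partial sums of 0.999... are increasing, never exceed 1, and eventually land within any tolerance of 1, so their supremum and their limit are both 1.

  # The N-th partial sum of 0.999... is exactly 1 - 10^(-N).
  from fractions import Fraction

  def partial_sum(n):
      return 1 - Fraction(1, 10**n)

  eps = Fraction(1, 10**12)   # an arbitrary tolerance
  n = 1
  while 1 - partial_sum(n) >= eps:
      n += 1
  print(n, float(partial_sum(n)))   # 13 0.9999999999999999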

[+] andrewprock|5 years ago|reply
This is in large part due to the difficulty of reasoning about infinite representations. You do have to add axioms to your system to be able to reason about 0.9999...

Stating that 0.9999... = 1 without exposing these new tools - tools meant to grapple with concepts that physically cannot be grappled with - is a huge mistake.

[+] erodommoc|5 years ago|reply
And this I think is the real issue. When someone says that 0.999... = 1.0, what they are saying is that this is true given a number of assumptions that we are taking for granted that would not be obvious to a non-mathematician. There's a lot of math hiding in those '...'.
[+] supercasio|5 years ago|reply
What? 0.999... = 1 is not dogma. Please don't spread misinformation. And at least read the link before commenting on something.
[+] ginko|5 years ago|reply
I remember being doubtful when I was presented with this in middle school, but being shown it as fractions makes it obvious:

      1/3 =     0.333..
  3 * 1/3 = 3 * 0.333..
      3/3 =     0.999..
        1 =     0.999..
[+] larschdk|5 years ago|reply
I don't mean to troll you, but if you were doubtful that 0.999... = 1, then you should also be doubtful that 0.333.. = 1/3. Any argument that 0.999... is not quite 1 can also be used to argue that 0.333... is not quite 1/3.

I think it's mostly a matter of definition, since mathematicians consider sums of infinite series equal to their limit (if it's finite), I guess for many practical reasons. If you accept this, then 0.999... = 1. If you don't, then 0.999... can't be assigned a value (but converges to 1), which may be the intuitive understanding of infinite series for some.

[+] iso1631|5 years ago|reply
Another secondary school 'proof'

  x = 0.9999.....
  10x = 9.9999.....
  (10x - x) = 9x = (9.9999.... - 0.9999....) = 9
  x = 9/9 = 1
[+] captainmuon|5 years ago|reply
Well, if I could choose I wouldn't personally accept 1/3 = 0.333... . But rather, I'd say it equals a limit:

    1/3 = lim(N -> oo) 0.3{N}     (the digit 3 repeated N times)

In particular, I would distinguish between infinitely many threes, and N threes where N goes to infinity. In the first case, you would still be missing an infinitesimal amount; in the latter case you have the usual situation and the sequence has the least upper bound 1/3.

When you are calculating a limit, you can never just plug in the value for N (say if N is in the denominator and the limit goes to 0). Why should you be able to do this when N is infinity?

At least this is my personal justification why I find non-standard reals interesting. They also justify the nice calculation method where you can cancel out 'dx'es from fractions.
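
A quick Python check of that distinction (example entirely mine): the N-threes truncation misses 1/3 by exactly 1/(3 * 10^N), and that gap shrinks toward 0 as N grows.

  from fractions import Fraction

  for N in (1, 2, 5, 10):
      approx = Fraction(int("3" * N), 10**N)   # 0.33...3 with N threes
      print(N, Fraction(1, 3) - approx)        # 1/30, 1/300, 1/300000, ...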

[+] johnhattan|5 years ago|reply
I remember a conversation I had with my daughter in the car when she was starting out with algebra...

Me: Is 9.999... the same as 10, or is it just really close to 10?

Kid: Really close. It never gets all the way there.

Me: Well then how close? What do you get when you subtract 9.999... from 10?

Kid: (pause) An infinite number of zeroes... and then a one... wait, you can't do that.

Me: Right. You just have an infinite number of zeroes. Which is zero.

Kid: (pause) Oh, that's mind-blowing.

[+] cesarb|5 years ago|reply
I personally like using fractions of 9.

  1/9 = 0.111...
  2/9 = 0.222...
  3/9 = 0.333...
  ...
  8/9 = 0.888...
  9/9 = 0.999...
What's neat is that this trick works for any repeating decimal, with any number of digits in the repeating part. For instance:

  123/999 = 0.123123123...
  999/999 = 0.999999999...
Multiply or divide by powers of 10 as necessary to shift the decimal point, and add the non-repeating part.

Once you accept this mapping, it's trivial to treat 0.999... as 9/9 (or 99/99, or 999/999, etc). Which can be simplified to 1.
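
Here's a rough Python sketch of that recipe (the function name and the prefix handling are mine, not cesarb's): the repeating block goes over nines, shifted past any non-repeating part.

  from fractions import Fraction

  def repeating_decimal(prefix, block):
      """Exact value of 0.<prefix><block><block>... as a fraction."""
      p, k = len(prefix), len(block)
      non_repeating = Fraction(int(prefix) if prefix else 0, 10**p)
      repeating = Fraction(int(block), (10**k - 1) * 10**p)
      return non_repeating + repeating

  print(repeating_decimal("", "9"))     # 1        (0.999...)
  print(repeating_decimal("", "123"))   # 41/333   (= 123/999)
  print(repeating_decimal("12", "3"))   # 37/300   (0.12333...)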

[+] ssivark|5 years ago|reply
Nice to see a few different proofs/intuitions here. Not being a big fan of symbol manipulation, I always felt partial to the proof/intuition that you couldn't find another number between the two :-)
[+] ant6n|5 years ago|reply
Well, here you reduced 1 = .9999... to 1/3 = 0.333... What if I don't believe that second equation?
[+] emerongi|5 years ago|reply
This also happens to be the test for whether your calculator is any good.
[+] ping_pong|5 years ago|reply
My 5-year-old stumped me with this, and I had to look it up. He asked me why 1/3 + 1/3 + 1/3 = 1, since it's equal to 0.333... + 0.333... + 0.333... which is 0.999... How can that possibly equal 1.000...? And is 0.66... equal to 0.67000...?

I didn't have a good enough answer for him, so I had to look it up and found this page. I tried to explain it to him but since I'm a terrible teacher and he's only 5, it was hard for me to convince him. Luckily he has many years before it matters!

[+] rudolph9|5 years ago|reply
> He asked me why 1/3 + 1/3 + 1/3 = 1, since it's equal to 0.333... + 0.333... + 0.333... which is 0.999... How can that possibly equal 1.000...? And is 0.66... equal to 0.67000...?

This would make me very proud.

[+] sebringj|5 years ago|reply
Is this problem simpler than we want it to be? Meaning, 1/3 is a concept stating there is 1 part of 3 total. If you have 3 total parts and add them, it is a whole. Trying to shoe-horn it into the decimal system is similar to trying to represent pi as a clean number in the decimal system, etc. Isn't the issue representing the number in one form versus another, not the actual logic of the issue? idk
[+] iamgopal|5 years ago|reply
A 5-year-old being curious and asking such a question is mind-blowing.
[+] naringas|5 years ago|reply
> Luckily he has many years before it matters!

and depending on career choices, it might never matter at all.

[+] parski|5 years ago|reply
I asked my math teacher this when I was a kid. He told me to accept that's the way it is so I did.
[+] bognition|5 years ago|reply
No, .666666 is not equal to .6700000

0.666... is equal to 0.666...7

[+] klodolph|5 years ago|reply
An interesting consequence of this shows up in proofs.

You’ll see various proofs involving real numbers that must account for the fact that 0.999…=1.0. There are, of course, many different ways to construct real numbers, and often it’s very convenient to construct them as infinite sequences of digits after the decimal. For example, this construction makes the diagonalization argument easier. However, you must take care in your diagonalization argument not to construct a different decimal representation of a number already in your list!
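
One standard precaution (a sketch of mine; the comment doesn't spell out a particular fix) is to pick replacement digits that are never 0 or 9, so the diagonal number can't turn out to be an alternate expansion of something already in the list:

  def diagonal_digits(rows, n_digits=10):
      """rows[i] holds the decimal digits of the i-th listed real in [0, 1].
      Build a number differing from rows[i] at position i, using only the
      digits 4 and 5 so it has a unique decimal expansion."""
      return [4 if rows[i][i] != 4 else 5 for i in range(n_digits)]

  rows = [[(i + j) % 10 for j in range(10)] for i in range(10)]
  print(diagonal_digits(rows))   # [4, 4, 5, 4, 4, 4, 4, 5, 4, 4]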

[+] rini17|5 years ago|reply
I never understood the fixation on diagonalization. Why couldn't there exist some other way of mapping such a set to the countables?
[+] bytedude|5 years ago|reply
Flame wars over this used to be common on the internet. People intuitively have the notion that the left side approaches 1, but never actually equals it. They see it as a process instead of a fixed value. Maybe the notation is to blame.
[+] jhanschoo|5 years ago|reply
The intuition is right, and the mathematical definition relies on the intuition. It's just that people haven't been exposed to the actual definition when it comes to real numbers.

Mathematicians prove that there is a unique number that this process goes to (and not, say, two distinct numbers), and define the notation to represent that unique number.

[+] tgv|5 years ago|reply
Repetition can easily be seen as a process, which would indeed approach 1. But I think the idea of infinite repetition is very hard to get.
[+] username90|5 years ago|reply
The intuition that there is something in between isn't really wrong: infinitesimals make sense and they work, otherwise physicists wouldn't be able to work with them. So that intuition is correct; it is mathematicians who just don't understand it fully yet. Maybe fully formalizing this is what unlocks the final piece keeping us from creating a unified theory in physics?
[+] orthoxerox|5 years ago|reply
I remember WarCraft 3 official forums being torn apart by this, with probably thousands of comments in the thread. Blizzard even had to post their official stance on the issue, but that didn't calm those who insisted 0.999... was 1 minus epsilon and not exactly 1.
[+] jcfields|5 years ago|reply
I'm glad someone else remembers this. To this day, whenever I see 0.999... = 1, I think of the Battle.net forums inexplicably flooded with threads about it for what felt like ages.
[+] steerablesafe|5 years ago|reply
Maybe the major source of confusion is that our decimal representation for whole numbers is supposed to be unique. Then when we extend it to rationals and reals, this property fails for rationals of the form a/10^n.

Arguably the sign symbol ruins it for whole numbers as well, as +0 and -0 could be equally valid representations of the number 0. We just conventionally don't allow -0 as a representation. There are other number representations that don't have this problem.

[+] onion2k|5 years ago|reply
I think the source of confusion is that people can't cope with recurring numbers. When someone says .999...=1 the listener assumes that the 0.999... stops at some point, and if that happens it'll always be below 1 because they can imagine adding another 9. Essentially, people actually ignore the "..." because that's the hard part.
[+] virgilp|5 years ago|reply
Right - I also find it easier to say that really, "1" is just a different/shorthand notation for 0.(9). It's not "two different, but equal numbers" - it's two different notations for the same number. Like how you can write the same number in different ways in different bases - this is just writing the same number in the "infinite number of decimals" way vs the "natural" way.
[+] baddox|5 years ago|reply
I don’t know, I don’t recall students having much trouble accepting that 1 = 1.0 = 1.000.
[+] heinrichhartman|5 years ago|reply
0.9999... = 1 is a consequence of the way we define rational and real numbers and limits. There are alternative definitions of numbers where this equality does not hold: Non Standard Analysis https://en.wikipedia.org/wiki/Nonstandard_analysis being the most famous one.

But for the sake of argument, let's just define numbers as sequences of digits with a decimal point mixed in somewhere:

    MyNumber := {
      a = (a_1, a_2, ...) -- list of digits a_i = 0 .. 9; a_1 != 0.
      e -- exponent (integer)
      s -- sign (+/- 1)
    }
Each such sequence corresponds to the (classical) real number: s * \sum_i a_i * 10^{e - i}.

We can go on and define addition, subtraction, multiplication and division in the familiar way.

Problems arise only when we try to establish desirable properties, e.g.

(1/3) * 3 = 1

Does NOT hold here, since 0.9999... is a different sequence from 1.000....

So yes, you can define these number systems, and you will have 0.999... != 1. But working with them will be pretty awkward, since a lot of familiar arithmetic breaks down.
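
A minimal Python sketch of that setup (the MyNumber name is from the comment; the digit-stream encoding and the example exponents are my own guesses): equality of these objects is literal equality of their data, so the objects behind 1.000... and 0.999... are simply different.

  from dataclasses import dataclass
  from typing import Callable

  @dataclass
  class MyNumber:
      digits: Callable[[int], int]   # i -> a_i for i >= 1, each in 0..9
      e: int                         # exponent
      s: int                         # sign, +1 or -1
      # intended value: s * sum_i a_i * 10^(e - i)

  one      = MyNumber(lambda i: 1 if i == 1 else 0, e=1, s=+1)  # 1.000...
  nine_rep = MyNumber(lambda i: 9,                  e=0, s=+1)  # 0.999...

  print([one.digits(i) for i in range(1, 6)])       # [1, 0, 0, 0, 0]
  print([nine_rep.digits(i) for i in range(1, 6)])  # [9, 9, 9, 9, 9]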

[+] ltbarcly3|5 years ago|reply
This is 'more intuitive' if you think about it this way:

If any two real numbers are not equal, then you can take their average and get a third number that is halfway between them. Conversely, if the average of two numbers is equal to either of the numbers, then the two numbers are equal. (This isn't a proof, just a way to convince yourself of it.)

What's the average of .9999... and 1?
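
As a loose numerical illustration (truncations only, example mine): the midpoint of the N-nines truncation and 1 always sits strictly between them, but its distance from 1 is half of 10^-N and vanishes as N grows - there's no room left for a number strictly between 0.999... and 1.

  from fractions import Fraction

  for N in (1, 3, 6):
      trunc = 1 - Fraction(1, 10**N)   # 0.9, 0.999, 0.999999
      mid = (trunc + 1) / 2
      print(N, mid, float(1 - mid))    # gaps: 0.05, 0.0005, 5e-07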

[+] sleepyams|5 years ago|reply
There is a nice characterization of decimal expansions in terms of paths on a graph:

Let C be the countable product of the set with ten elements, i.e. {0, 1, 2, ..., 9}. The space C naturally has the topology of a Cantor set (compact, totally disconnected, etc). Furthermore, for example, in this space the tuples (1, 9, 9, 9, ...) and (2, 0, 0, 0, ...) are distinct elements.

The space C can also be described in terms of a directed graph, where there is a single root with ten outward directed edges, and each child node then has ten outward directed edges, etc. C can be thought of as the space of infinite paths on this graph.

A continuous and surjective map from C to the unit interval [0, 1] can be constructed from a measure on these paths. For any suitable measure, this map is finite-to-one, meaning at most finitely many elements of C are mapped to a single element in the interval. For example, there is a map which sends (1, 9, 9, ...) and (2, 0, 0, ...) to the element "0.2".

The point is that all decimal expansions of elements of [0, 1] can be described like this, and we can instead think of the unit interval not as being composed of numbers _intrinsically_, but more like some kind of mathematical object that _admits_ decimal expansions. The unit interval itself can be described in other ways mathematically, and is not necessarily tied to being represented as real numbers. Hope this helps someone!
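
A rough Python rendering of that map (construction mine; the comment describes it measure-theoretically): send a digit path (d1, d2, ...) to the sum of d_i / 10^i, and note that the two paths below land on the same point of [0, 1].

  from fractions import Fraction
  from itertools import chain, islice, repeat

  def value(path, n_terms=40):
      """Map a digit path in C = {0..9}^N to a point of [0, 1] (truncated)."""
      digits = islice(path, n_terms)
      return sum(Fraction(d, 10**i) for i, d in enumerate(digits, start=1))

  path_a = chain([1], repeat(9))   # (1, 9, 9, 9, ...)
  path_b = chain([2], repeat(0))   # (2, 0, 0, 0, ...)
  print(float(value(path_a)), float(value(path_b)))   # both print 0.2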

[+] cjfd|5 years ago|reply
Ultimately this is more a matter of how R is defined than a theorem. One can also work with sets of numbers in which the completeness axiom does not hold, e.g. sets of numbers in which one also has infinitesimals.
[+] calibas|5 years ago|reply
And this is why I prefer hyperreals.

0.999... = 1 - 1/∞

We talk about infinity all the time in mathematics, teachers use the concept to introduce calculus in a way that people can more easily understand, but using infinity directly is almost universally banned within classrooms.

Nonstandard analysis is a much more intuitive way of understanding calculus: it's the whole "infinite number of infinitely small pieces" concept, but you're allowed to write it down too.

[+] russellbeattie|5 years ago|reply
I'll just chime in with my completely ignorant theory that 1 - 0.999... = the infinitely smallest number, but is still, in my mind, regardless of any logic, reason, or educated calculations, greater than 0.

I understand and accept this is wrong. However, somewhere in my brain I still believe it. Sort of like +0 and -0, which are also different in my head.

[+] JJMcJ|5 years ago|reply
Usually the concept of a limit, which assigns a meaning to 0.999..., isn't studied until calculus.

There are approaches to mathematics that avoid infinite constructions, and a "strict finitist" would not assign 0.999... a meaning.

The stunning success of limit-based mathematics makes finitism a fringe philosophy.

Remember, class, for every epsilon there is a delta.

[+] traderjane|5 years ago|reply
Professor N.J. Wildberger is probably among the most well-known "ultrafinitists" on YouTube.

https://www.youtube.com/watch?v=WabHm1QWVCA

I mention him because I would think he sympathizes with those who have concern over the meaning of this kind of notation.

[+] sv_h1b|5 years ago|reply
0.999...=1 is true in the mathematical sense, period.

However, as a representation of the physical world, there is a caveat. As far as we understand, the physical world appears and behaves discretely, because at the Planck scale (approx. 10^-35 m) distances seem to behave discretely.

Although most people don't know or understand the Planck scale, they do grasp this concept intuitively. What they are really saying is that in the physical world there's some small interval (more precisely, about [1 - 10^-35, 1]) which can't be subdivided further, based on our current knowledge.

The same thing applies to the Planck time (approx. 5 * 10^-44 s) too.

So people are arguing two different things - the pure maths concept, or the real world interpretation.

[+] sebringj|5 years ago|reply
The thing that helps me "understand" it is that the universe has finite smallest sizes - the Planck length, for example, which I imagine as a theoretical smallest distance. Now imagine going smaller than the (finite) Planck length in terms of the difference between 0.9 repeating and 1, since infinitely small differences can do that. Essentially there is then no way to tell the difference between 0.9 repeating and 1 from a practical or theoretical perspective of measurement. So not imagining infinity lets us at least imagine something smaller than the smallest measurable thing.