0.999... = 1?
Background #
I was first introduced to the concept that \(0.\overline{9}\) (i.e., \(0.999...\)) is equal to \(1\) when my older brother posed it to me while walking around Paris on a family vacation in 2015. Both he and my mother took turns trying to convince me, but I was adamant that there is a difference and that they are wrong. I’ve increased my math knowledge significantly since then, and I think that by far the greatest reason that I, and I assume many others, disagreed with this equality is that the statement is truly ambiguous given our knowledge at the time, even if we are not able to fully express why, and that there is a very simple way to dispel this ambiguity.
Disclaimer #
I am in no way a mathematician, although I do regularly use calculus and statistics in my PhD research in the field of theoretical computer science (probabilistic data structures and algorithms).
The Chase #
Let’s cut to the chase. I think the primary reason that people reject the equality \(0.\overline{9} = 1\) is because they don’t understand that the reason we say \(0.\overline{9}\) is equal to \(1\) is ultimately a convention of how to represent numbers in decimal notation. This is not some fundamental property of the universe. The decimal system is great, but has clear limitations—rational numbers are under no obligation to have a finite representation. The canonical example of this is the number \(\frac{1}{3}\), which cannot be represented in decimal notation using a finite number of digits, and is instead represented as \(0.333...\) or \(0.\overline{3}\). Depending on the base used, different numbers may fall victim to this fate. For example, \(\frac{1}{10}\), while represented as a finite number in base 10 (\(0.1\)), is represented as \(0.0\overline{0011}\) in base 2.
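To make the base-dependence concrete, here is a small Python sketch (entirely my own illustration, nothing standardized); the function name `expansion_digits` is made up for this example. It performs long division in an arbitrary base and reports where the digits start repeating, if at all.

```python
def expansion_digits(numerator, denominator, base):
    """Long division of numerator/denominator in the given base.

    Returns the digits after the radix point and the index at which
    they start repeating (None if the expansion terminates).
    """
    digits, seen = [], {}
    remainder = numerator % denominator
    while remainder and remainder not in seen:
        seen[remainder] = len(digits)     # remember where this remainder first appeared
        remainder *= base
        digits.append(remainder // denominator)
        remainder %= denominator
    return digits, seen.get(remainder)    # None means the division terminated

print(expansion_digits(1, 3, 10))   # ([3], 0)             -> 0.(3), repeats immediately
print(expansion_digits(1, 10, 10))  # ([1], None)          -> 0.1, terminates in base 10
print(expansion_digits(1, 10, 2))   # ([0, 0, 0, 1, 1], 1) -> 0.0(0011) in base 2
```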
People perhaps find it easier to accept that \(0.\overline{3}\) represents the number \(\frac{1}{3}\) because there is no simpler representation, while there is no reason to represent \(1\) as \(0.\overline{9}\). However, as a matter of convention, any repeating decimal is said to represent the smallest number that is greater than or equal to every number in its infinite sequence of truncations, i.e., the sequence \(0.9\), \(0.99\), \(0.999\), \(0.9999\), etc. Under this convention, the smallest number that is greater than or equal to every term of this sequence is \(1\), and thus the decimal \(0.\overline{9}\) represents the number \(1\).
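As a quick sanity check of that convention, here is a throwaway Python sketch using exact rational arithmetic (the helper name `truncation` is just something I chose): every truncation \(0.9, 0.99, 0.999, \ldots\) sits strictly below \(1\), yet any candidate bound below \(1\) is eventually overtaken, so no number smaller than \(1\) is greater than or equal to every term.

```python
from fractions import Fraction

def truncation(n):
    """The n-digit truncation 0.99...9 as an exact rational (equals 1 - 10**-n)."""
    return sum(Fraction(9, 10**k) for k in range(1, n + 1))

# Every truncation is strictly below 1 ...
assert all(truncation(n) < 1 for n in range(1, 30))

# ... but any bound below 1 is eventually exceeded, so nothing smaller
# than 1 can sit at or above the entire sequence.
candidate = Fraction(999_999, 1_000_000)  # an arbitrary number just below 1
assert any(truncation(n) > candidate for n in range(1, 30))
```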
Why it’s Not Obvious #
As I mentioned above, I had a hard time articulating why I disagreed with this equality when I was first introduced to it. When my brother asked me—what is the difference between \(1\) and \(0.\overline{9}\)?—I answered—\(0.\overline{0}1\). This is admittedly an abuse of standard decimal notation—but that’s kind of the point; this is just a notation. And my apprehension was touching upon a real point which I was finally able to articulate several years later when I took my first calculus course.
\[\begin{equation*} \lim_{n \to \infty}{\sum_{k=1}^{n} \frac{9}{10^k}} = 1 \end{equation*}\]
Finally I had the language to express what I understood by \(0.\overline{9}\)—a number that approaches (but never reaches) \(1\). I think this is a common way of understanding it, simply because this is the way it is often explained by people who themselves don’t fully understand the concept. The only reason I am writing this post is that I recently saw a (one-year-old) Numberphile video where the mathematician themselves explained it this way. And importantly, this is not what \(0.\overline{9}\) means; \(0.\overline{9}\) literally means \(1\) due to the standard convention of decimal notation. Let me explain with an example.
Wait, Where is it Different? #
Doesn’t the equation I wrote above literally say that this limit equals \(1\)? No. This is one of the confusing shorthands that mathematicians use. The equal sign in the limit does not really represent equality, certainly not in the traditional sense. Let’s state this limit in English:
The limit of \(0.9 + 0.09 + \cdots + \frac{9}{10^n}\) as \(n\) approaches \(\infty\) approaches \(1\).
Note how I said “approaches” and not “equals”—people, including me, often say that a limit “equals” a number as a sloppy shorthand, but this is generally not strictly the case. Before my promised example, let’s start with a simpler limit, specifically showing that it’s not true that \(\lim_{x \to 1^-} x\) strictly equals \(1\).
\[\begin{equation*} \lim_{x \to 1^-} \frac{1}{1 - x} = \infty, \end{equation*}\]
whereas \(\frac{1}{1 - x}\) is not defined at \(x = 1\), since \(\frac{1}{1 - 1} = \frac{1}{0}\).
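A quick numerical illustration of that difference (just a throwaway Python snippet; `f` is shorthand I am using for the expression above): values of \(\frac{1}{1-x}\) blow up as \(x\) approaches \(1\) from the left, while evaluating at \(x = 1\) itself simply fails.

```python
def f(x):
    return 1 / (1 - x)

for x in [0.9, 0.99, 0.999, 0.9999]:
    print(x, f(x))   # roughly 10, 100, 1000, 10000: growing without bound

try:
    f(1)             # the expression has no value at x = 1
except ZeroDivisionError:
    print("1 / (1 - 1) is undefined")
```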
This is a clear difference in behavior between the limit as we approach \(1\) (from the left) and the value at \(1\) itself. And if you don’t like these undefined values, you can contrive piecewise functions, such as
\[\begin{equation*} \lim_{x \to 1^-} \begin{cases} 195 & \text{if } x < 1 \\ 0 & \text{if } x \ge 1 \end{cases} = 195. \end{equation*}\]
The Promised Example #
Hopefully with this intuition you can start to see the problem with calling this a strict equality. Yes, yes, I used a tricky divide-by-zero example that is undefined at the point in question, and an entirely contrived piecewise function, and you may still be convinced that there aren’t any practical differences. I still think these examples are good; even just the fact that we approach \(1\) from the left as opposed to the right is an important distinction in the above two examples, but I digress. Here is perhaps a more convincing example:
\[\begin{equation*} \lim_{n \to \infty}{\left ( \sum_{k=1}^{n} \frac{9}{10^k} \right )^{10^n}} = \frac{1}{e} \end{equation*}\]
There is no trickery here;1 we literally have the limit of the exact sequence in question, with nothing undefined at any point, and no discontinuous functions. Meanwhile, plugging in \(1\) instead of our sum, we simply have \(\lim_{n \to \infty} 1^{10^n}\), which clearly not only approaches \(1\) but is strictly equal to \(1\). For those interested in the proof, it is easy to show that \(\sum_{k=1}^{n} \frac{9}{10^k} = 1 - \frac{1}{10^n}\). Letting \(x = 10^n\), we have
\[\begin{equation*} \lim_{x \to \infty}{\left ( 1 - \frac{1}{x} \right )^{x}} = \frac{1}{e}, \end{equation*}\]a well-known limit.2
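Nothing up my sleeve here either: a short Python check (using the standard-library `math.log1p` so the floating-point arithmetic stays accurate for large \(n\)) shows the sequence \(\left(1 - \frac{1}{10^n}\right)^{10^n}\) settling toward \(1/e \approx 0.3679\) rather than toward \(1\).

```python
import math

for n in range(1, 11):
    # (1 - 10**-n) ** (10**n), computed as exp(10**n * log(1 - 10**-n));
    # log1p keeps the logarithm accurate when 10**-n is tiny.
    value = math.exp(10**n * math.log1p(-10.0**-n))
    print(n, value)

print("1/e =", 1 / math.e)  # ~0.367879, which the values above approach
```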
Of course you can set the exponent higher to achieve even more drastic differences, e.g.,
\[\begin{equation*} \lim_{n \to \infty}{n f(n)^{11^n}} = \begin{cases} 0 & \text{if } f(n) = \sum_{k=1}^{n} \frac{9}{10^k} \\ \infty & \text{if } f(n) = 1 \end{cases}. \end{equation*}\]
Again, zero trickery. If you plug in \(1\) here, you get infinity, while if you plug in our sum, \(\sum_{k=1}^{n} \frac{9}{10^k}\), you get zero. There is a difference!
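The same kind of rough numerical sketch works for this last limit: with \(f(n) = \sum_{k=1}^{n} \frac{9}{10^k} = 1 - \frac{1}{10^n}\) the product \(n f(n)^{11^n}\) collapses toward \(0\), while with \(f(n) = 1\) it is just \(n\) and grows without bound.

```python
import math

for n in [5, 10, 20, 40]:
    with_partial_sum = n * math.exp(11**n * math.log1p(-10.0**-n))  # f(n) = 1 - 10**-n
    with_one = n * 1 ** (11**n)                                     # f(n) = 1, so this is just n
    print(n, with_partial_sum, with_one)
# The first column shrinks toward 0; the second is n itself, growing without bound.
```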
Key Point #
If someone describes \(0.\overline{9}\) as the infinite sequence \(0.9, 0.99, 0.999, \ldots\), and they claim that this is equal to \(1\), they are wrong. Your gut intuition is correct. Any explanation that boils down to \(\lim_{n \to \infty}{\sum_{k=1}^{n} \frac{9}{10^k}}\) is incorrect. We have been gaslit about this for too long.
The true reason that \(0.\overline{9}\) is equal to \(1\) is the simplest reason on the planet—we defined it that way. Any fancy math that says to multiply it by \(10\) to get \(9.\overline{9}\), then subtract \(0.\overline{9}\) from it to get \(9\), and finally divide by \(9\) to get \(1\), or any other trick that claims to prove it, is missing the point entirely. These tricks don’t prove anything. They aren’t convincing. All they show is that the explainer either doesn’t understand the point of confusion, doesn’t really understand why \(0.\overline{9}\) is equal to \(1\), or, most likely, both.
Stating that \(0.\overline{9}\) technically represents \(1\) as a quirk of our decimal notation is perfectly fine, but I have never heard it introduced that way. It’s always accompanied by some bogus explanation about infinite sequences that tries to make you seem silly for not understanding these (again, wrong) proofs. Please leave a comment if you see some problem with this.
By the way, I do understand that if you evaluate the limit which approaches 1 before other limits, i.e., using two separate limits, then with our definition of the limit we obtain the same results as just plugging in \(1\). But again, this is just a result of our standard convention of limits, and the simple example of \(\frac{1}{1 - x}\) perhaps highlights that our convention is not trivial.
Alleged Points of Confusion #
Wikipedia cites the following key contributing factors as to why students reject this equality:3
1. Students are often 'mentally committed to the notion that a number can be represented in one and only one way by a decimal'. Seeing two manifestly different decimals representing the same number appears to be a paradox, which is amplified by the appearance of the seemingly well-understood number 1.
2. Some students interpret '0.999...' (or similar notation) as a large but finite string of 9s, possibly with a variable, unspecified length. If they accept an infinite string of nines, they may still expect a last 9 'at infinity'.
3. Intuition and ambiguous teaching lead students to think of the limit of a sequence as a kind of infinite process rather than a fixed value since a sequence need not reach its limit. Where students accept the difference between a sequence and its limit, they might read '0.999...' as meaning the sequence rather than its limit.
All these points cite some books which I don’t have access to, so I’m not sure what evidence they use. Regardless, here are my thoughts:
1. I find this point pretty unconvincing. If you literally said, “we decided to also let \(0.\overline{9}\) exactly represent \(1\) because it helps us keep some things consistent, and that was just a decision we chose to make that didn’t need to be made that way,” I think people would have a much easier time accepting this.
2. I mean, some people indeed interpret it as the limit of \(0.9 + 0.09 + \cdots + \frac{9}{10^n}\), and under that interpretation, it indeed is not equivalent to \(1\), as I explained above. I think this is a pretty common misunderstanding, caused primarily by how \(0.\overline{9} = 1\) is often explained.
3. I think this is certainly the most reasonable point. And if you state that the limit of an infinite sequence is equal to whatever it approaches, as is admittedly standard practice, then it’s true. However, I think examples like \(\frac{1}{1 - x}\) kind of intuitively show that this definition of the limit is not trivial. Under the standard definition, \(\lim_{x \to 1^-} \frac{1}{1 - x} = \infty\), while \(\frac{1}{1 - \lim_{x \to 1^-} x} = \frac{1}{1 - 1} = \frac{1}{0}\), which isn’t defined. I again argue that this convention may be the fundamental point of confusion here, and people who explain this equality by stating that this distinction is trivial and that there’s no other way to define it are missing the point.
Anyway, please let me know what you think, whether you agree or disagree, etc.
1. The remaining “trickery” here is that I didn’t separate both limits, as discussed above. I do maintain that this convention is just that—a convention, and while it certainly behaves consistently and plays nicely and is useful, it ultimately boils down to a choice. ↩
2. For those interested, one of the ways to define the number \(e\) is as the limit of the sequence \(\left ( 1 + \frac{1}{n} \right )^n\) as \(n\) approaches \(\infty\). Here is some very hand-wavy intuition: \((1 - \frac{1}{x})^x = \left [ (1 + \frac{1}{-x})^{-x} \right ]^{-1} = \left [ (1 + \frac{1}{n})^{n} \right ]^{-1} \to [ e ]^{-1}\), where we substituted \(n\) for \(-x\). ↩
3. Wikipedia didn’t enumerate these points; I did, to make it easier to reference them. ↩