Euler's "blunder" revealed in rigorous detail:
Here is the "proof" of 0.(9) = 1 as provided by Euler, in which he claims that number manipulation can magically morph a "number" assigned as a constant into another "number": let n = 0.999..., then 10n = 9.999..., subtract to get 9n = 9, and so n = 1.
First off, since when did n become a variable that we needed to solve for? It is a constant. Constants don't change their value. Try assigning a value to a constant in a computer program and see if the value ever "ends up as something else".
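To the point above, a tiny sketch (in Python, which has no enforced constants, so the ALL_CAPS name is just convention; the value 0.999 is my finite stand-in, since "0.999..." is not a writable literal):

```python
# Python has no true constants; ALL_CAPS is only a naming convention.
# The point: once a name is bound to a value, that value does not
# "end up as something else" unless we explicitly rebind it.
X = 0.999  # finite stand-in for the repeating decimal

assert X == 0.999  # still exactly what was assigned
print(X)
```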
The huge blunder here is the combination of multiplying by 10 and then subtracting one series from the resulting series, namely 10 × 0.999... − 0.999...
There are other underlying assumptions too, but I addressed those earlier in the thread (specifically, multiplying an infinite series by anything that causes a carry or a decimal shift).
Let's do 10n − n using the definition of n: n = 0.999... = ∑_{k=1}^∞ 9/10^k, so 10n = ∑_{k=1}^∞ 90/10^k.
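Using that series definition, here is a quick check with exact rational arithmetic (a sketch on finite truncations only; the cutoff N is my choice, not part of the original argument):

```python
from fractions import Fraction

def S(N):
    """Partial sum of the definition: sum of 9/10^k for k = 1..N."""
    return sum(Fraction(9, 10**k) for k in range(1, N + 1))

for N in (1, 2, 5):
    # 10n - n on the first N terms, kept exact with Fraction
    print(N, 10 * S(N) - S(N))
```

Every finite truncation leaves 10·S(N) − S(N) equal to 9 − 9/10^N exactly.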
Now, what Euler does is subtract the two series,
...except what he failed to show everyone, or assumed was OK, was that he shifted the second series to the right by one decimal place.
How he does this is by doing the following: 90/10^k = 9/10^(k−1), which reindexes ∑_{k=1}^∞ 90/10^k as ∑_{k=0}^∞ 9/10^k.
I believe Colin pointed this out at one point and said this was "ok" to do. At first glance, the fractions are all equivalent... except...
...the SERIES has changed slightly: it is now ∑_{k=0}^∞ 9/10^k.
[ Notice the starting index value: k = 0 instead of k = 1. ]
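A small check of the reindexing in question (Python's Fraction, with loop bounds of my own choosing): the individual fractions match, while the index ranges do not.

```python
from fractions import Fraction

# Term-by-term, 90/10^k (k >= 1) equals 9/10^(k-1) -- the fractions agree...
for k in range(1, 6):
    assert Fraction(90, 10**k) == Fraction(9, 10**(k - 1))

# ...but the reindexed series runs over a different index set:
print(list(range(1, 6)))  # original series: k starts at 1
print(list(range(0, 5)))  # shifted series:  k starts at 0
```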
This allows the subtraction to occur as:
10n − n = ∑_{k=0}^∞ 9/10^k − ∑_{k=1}^∞ 9/10^k
instead of (the correct way):
10n − n = ∑_{k=1}^∞ 90/10^k − ∑_{k=1}^∞ 9/10^k
because 10n is really equal to ∑_{k=1}^∞ 90/10^k.
So, by the property of addition and subtraction of summations,
∑_{k=m}^∞ a_k ± ∑_{k=m}^∞ b_k = ∑_{k=m}^∞ (a_k ± b_k),
we can see that we shouldn't shift the series, because the shift changes the starting index: k = 0 instead of k = 1.
Let's use the proposed Theorem 17 to subtract the two series with equal starting indices:
10n − n = ∑_{k=1}^∞ (90 − 9)/10^k = ∑_{k=1}^∞ 81/10^k
or, using the altered series with Theorem 17 applied properly:
Let's do a few terms, shall we?
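For the equal-index subtraction, the term-wise differences are 81/10^k; here is a sketch of the first few partial sums, kept exact and then shown as decimals (the number of terms is my choice):

```python
from fractions import Fraction

total = Fraction(0)
for k in range(1, 6):
    total += Fraction(90, 10**k) - Fraction(9, 10**k)  # = 81/10^k
    print(k, total, float(total))
```

Each step appends another 9: 8.1, 8.91, 8.991, and so on.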
I believe you get the point now: by changing the series, you change the results.
Therefore Euler mistakenly assumed he could slide the series over to the right and "cancel off" all the decimal digits, incorrectly, in order to achieve his desired result of 9n = 9.
We are not allowed, during this proof, to use the definition 0.(9) = 1, because the point of the proof was to prove that the two numbers are equal "only by doing some basic algebra".
It was a nifty trick that he used, to manipulate the series in order to "cancel off" all the decimals.
We know what results we should get at EVERY point in every equation: if we define x = a, then x always equals a. I showed that by NOT assuming things as Euler did, we end up with the proper result, not some illusion where (a) becomes (b) by hocus pocus. If we started with x = 1 and ended up with x = 2, would THAT be OK too? I think not...
The conclusion is that 0.999... does not equal 1 in the end; it equals 0.999... So he proved NOTHING, or maybe only that extra care is due when performing "basic number manipulation" on an infinite series.
Another source for the converging-series identities: the difference of convergent series follows the same logic as the sum of series.
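A sketch of that identity on partial sums (the terms a_k and b_k below are my stand-ins, taken from the series discussed above): with equal starting indices, subtracting the two sums equals summing the term-wise differences.

```python
from fractions import Fraction

def a(k): return Fraction(90, 10**k)  # terms of 10n
def b(k): return Fraction(9, 10**k)   # terms of n

N = 8  # truncation point, my choice
lhs = sum(a(k) for k in range(1, N + 1)) - sum(b(k) for k in range(1, N + 1))
rhs = sum(a(k) - b(k) for k in range(1, N + 1))
assert lhs == rhs
print(lhs == rhs)
```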