the problem with this proof is that it assumes root(1 + root(1 + root(1 + root(1 + ...)))) converges to a finite number phi. for example, how do we know it doesn't shoot off to infinity instead?
allow me to illustrate my point by considering a similar "repetition":
((( (...) - 1)^2 - 1)^2 - 1)^2
we could define this as the limit of the sequence a_n with the following recursion formulae:
a_0 = 0
a_(n+1) = (a_n - 1)^2
if we assume this sequence has a finite limit L, then by limit laws:
L = (L - 1)^2
expanding gives L^2 - 3L + 1 = 0, which implies:
L = [3 +/- root(5)] / 2
L ~ 2.618 or L ~ 0.382
this is nonsense. if instead we look at each iteration, we observe that the sequence oscillates between 0 and 1 and does not converge:
a_0 = 0
a_1 = 1
a_2 = 0
...
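if you want to see this numerically, here is a minimal python sketch of the recursion (nothing in it beyond the formula above):

    a = 0
    for n in range(6):
        print(n, a)        # prints 0 0, 1 1, 2 0, 3 1, ... forever
        a = (a - 1) ** 2

the printed values just flip between 0 and 1, so no limit exists, let alone 2.618 or 0.382.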
this example shows that it's possible to obtain an incorrect limit simply by assuming the sequence converges. returning to the infinite square root problem, we must first be sure that it does indeed have a finite limit before we attempt any limit algebra.
to do this we have to prove 2 points:
(a) that the sequence is monotonically increasing (a_(n+1) > a_n for all n) and thus cannot oscillate.
(b) that the sequence is bounded above, i.e. there is a constant bigger than every term of the sequence (like a ceiling which "blocks" the sequence from diverging to infinity).
if we can show that (a) and (b) are true, then by the monotone convergence theorem the sequence must converge to a finite limit.
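for intuition, here is a toy python sketch (my own example, not part of the original argument) of a sequence satisfying both (a) and (b):

    # a_n = 1 - 1/2^n: increasing (a) and bounded above by 1 (b),
    # so the monotone convergence theorem guarantees a limit (here, 1)
    for n in range(10):
        print(n, 1 - 1 / 2**n)

it climbs at every step but can never pass 1, so it has nowhere to go except its limit.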
define the infinite square root as the limit of the sequence a_n with the following recursion formulae:
a_0 = 0
a_(n+1) = root(1 + a_n)        --------(1)
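before proving anything it is worth iterating (1) numerically; a short python sketch (the 8 iterations are an arbitrary choice):

    import math

    a = 0.0
    for n in range(8):
        print(n, a)          # 0 0.0, 1 1.0, 2 1.414..., 3 1.553..., climbing towards ~1.618
        a = math.sqrt(1 + a)

this suggests the sequence is increasing and settling down near 1.618, which is exactly what (a) and (b) will confirm.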
by inspection: a_n < a_(n+1), so (a) is true. (strictly, this follows by induction: a_0 < a_1, and since root(1 + x) is increasing, a_(n-1) < a_n implies a_n < a_(n+1).)
rearranging (1) for a_n and subbing into this inequality yields:
a_(n+1)^2 - 1 < a_(n+1)
a_(n+1)^2 - a_(n+1) - 1 < 0
the roots of x^2 - x - 1 = 0 are [1 +/- root(5)]/2 (by the quadratic formula), and the quadratic is negative only between its roots, so this implies:
a_(n+1) < [1 + root(5)]/2
a_n < a_(n+1) < [1 + root(5)]/2
(b) is true, as the sequence is always less than the constant [1 + root(5)]/2.
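as a sanity check on (b), running the same iteration longer in python shows every term staying strictly below the bound (20 iterations is an arbitrary cutoff):

    import math

    bound = (1 + math.sqrt(5)) / 2    # ~1.6180339887
    a = 0.0
    for n in range(20):
        a = math.sqrt(1 + a)
    print(a, a < bound)               # ~1.6180339886..., True: still (just) below the bound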
therefore, the sequence must converge to some finite number, phi, and only now can we apply limit laws:
phi = root(1 + phi)
squaring gives phi^2 - phi - 1 = 0, and since every term is nonnegative we take the positive root:
phi = [1 + root(5)]/2
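as a final check, this value really is a fixed point of the recursion (a quick python verification):

    import math

    phi = (1 + math.sqrt(5)) / 2
    print(phi, math.sqrt(1 + phi))    # both ~1.618033988749895, so phi = root(1 + phi) holds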