
Recently, while doing some computations, I stumbled upon a very interesting identity, which I do not want to withhold from you all:

Theorem.

Let K be a field and x_1, \dots, x_n \in K such that 1 + \sum_{j=1}^i x_j \neq 0 for all i = 1, \dots, n. Then

1 - \sum_{i=1}^n \frac{x_i}{\Bigl( 1 + \sum_{j=1}^{i-1} x_j \Bigr) \Bigl( 1 + \sum_{j=1}^i x_j \Bigr)} = \frac{1}{1 + \sum_{i=1}^n x_i}.
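For example, for n = 2 the identity reads

1 - \frac{x_1}{1 \cdot (1 + x_1)} - \frac{x_2}{(1 + x_1)(1 + x_1 + x_2)} = \frac{1}{1 + x_1 + x_2}.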

It is very easy to prove it by induction:

Proof.

For n = 1, the left-hand side equals

1 - \frac{x_1}{( 1 ) ( 1 + x_1 )} = \frac{1 + x_1 - x_1}{1 + x_1} = \frac{1}{1 + x_1},

which equals the right-hand side for n = 1. Hence, the statement is true for n = 1. Now assume that it holds for n. Then

& 1 - \sum_{i=1}^{n+1} \frac{x_i}{\Bigl( 1 + \sum_{j=1}^{i-1} x_j \Bigr) \Bigl( 1 + \sum_{j=1}^i x_j \Bigr)} \\
{}={} & 1 - \sum_{i=1}^n \frac{x_i}{\Bigl( 1 + \sum_{j=1}^{i-1} x_j \Bigr) \Bigl( 1 + \sum_{j=1}^i x_j \Bigr)} - \frac{x_{n+1}}{\Bigl( 1 + \sum_{j=1}^n x_j \Bigr) \Bigl( 1 + \sum_{j=1}^{n+1} x_j \Bigr)} \\
{}={} & \frac{1}{1 + \sum_{j=1}^n x_j} - \frac{x_{n+1}}{\Bigl( 1 + \sum_{j=1}^n x_j \Bigr) \Bigl( 1 + \sum_{j=1}^{n+1} x_j \Bigr)} \\
{}={} & \frac{1 + \sum_{j=1}^{n+1} x_j}{\Bigl( 1 + \sum_{j=1}^n x_j \Bigr) \Bigl( 1 + \sum_{j=1}^{n+1} x_j \Bigr)} - \frac{x_{n+1}}{\Bigl( 1 + \sum_{j=1}^n x_j \Bigr) \Bigl( 1 + \sum_{j=1}^{n+1} x_j \Bigr)} \\
{}={} & \frac{1 + \sum_{j=1}^n x_j}{\Bigl( 1 + \sum_{j=1}^n x_j \Bigr) \Bigl( 1 + \sum_{j=1}^{n+1} x_j \Bigr)} = \frac{1}{1 + \sum_{j=1}^{n+1} x_j},

where the second equality uses the induction hypothesis; this is what we had to show.
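If you want to double-check the identity numerically, here is a minimal Python sketch using exact rational arithmetic (the helper name check_identity and the restriction to positive random x_i, which keeps all denominators nonzero, are just for illustration):

from fractions import Fraction
from random import randint

def check_identity(xs):
    # left-hand side: 1 - sum_i x_i / ((1 + s_{i-1}) * (1 + s_i)),
    # where s_i = x_1 + ... + x_i is the i-th partial sum
    lhs = Fraction(1)
    s = Fraction(0)
    for x in xs:
        prev = 1 + s      # 1 + sum_{j=1}^{i-1} x_j
        s += x
        curr = 1 + s      # 1 + sum_{j=1}^{i} x_j
        lhs -= x / (prev * curr)
    return lhs == Fraction(1) / (1 + s)   # right-hand side: 1 / (1 + s_n)

# a few random samples; positive x_i keep all denominators nonzero
for _ in range(100):
    xs = [Fraction(randint(1, 9), randint(1, 9)) for _ in range(6)]
    assert check_identity(xs)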

Yet, I have no idea what this identity should tell me. The left-hand side looks so complicated that there is no indication it should simplify to something like the right-hand side. This identity miraculously appeared when I computed the Gram-Schmidt orthogonalization of the linearly independent system v_i = \lambda_i e_i + e_n, 1 \le i < n, where \lambda_i \in \R \setminus \{ 0 \} and e_1, \dots, e_n is the standard orthonormal basis of \R^n. It turns out that one can describe the Gram-Schmidt orthogonalization explicitly, namely as \hat{v}_1, \dots, \hat{v}_{n-1} with

\hat{v}_i = \lambda_i e_i + \frac{1}{1 + \sum_{j=1}^{i-1} \lambda_j^{-2}} \biggl( -\sum_{j=1}^{i-1} \lambda_j^{-1} e_j + e_n \biggr),

and the squared norm of \hat{v}_i is given by

\langle \hat{v}_i, \hat{v}_i \rangle = \lambda_i^2 \cdot \frac{1 + \sum_{j=1}^i \lambda_j^{-2}}{1 + \sum_{j=1}^{i-1} \lambda_j^{-2}}.

This, too, one can easily show by induction, using the above identity; it appears twice, with x_i = \lambda_i^{-2}. Note that the system (v_1, \dots, v_{n-1}) already appeared once in this blog, namely when I tried to find the closest vector in its span to e_n; this was done in this post.
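To see the formulas in action, here is a small Python sketch (relying on numpy; the helper names explicit_gs and classical_gs are mine, purely for illustration) that compares the closed form above with a straightforward Gram-Schmidt computation and checks the squared norms:

import numpy as np

def explicit_gs(lambdas, n):
    # closed form: hat_v_i = lambda_i e_i
    #   + 1/(1 + sum_{j<i} lambda_j^{-2}) * (-sum_{j<i} lambda_j^{-1} e_j + e_n)
    vs = []
    for i, lam in enumerate(lambdas):
        v = np.zeros(n)
        v[i] = lam
        denom = 1 + sum(l ** -2 for l in lambdas[:i])
        for j, lj in enumerate(lambdas[:i]):
            v[j] -= lj ** -1 / denom
        v[n - 1] += 1 / denom
        vs.append(v)
    return vs

def classical_gs(lambdas, n):
    # textbook Gram-Schmidt (without normalization) applied to v_i = lambda_i e_i + e_n
    vs = []
    for i, lam in enumerate(lambdas):
        v = np.zeros(n)
        v[i], v[n - 1] = lam, 1.0
        for u in vs:
            v = v - (v @ u) / (u @ u) * u
        vs.append(v)
    return vs

n = 6
lambdas = np.random.default_rng(0).uniform(0.5, 2.0, size=n - 1)   # nonzero lambda_i
for i, (a, b) in enumerate(zip(explicit_gs(lambdas, n), classical_gs(lambdas, n))):
    assert np.allclose(a, b)                                       # same vectors
    s_prev = 1 + np.sum(lambdas[:i] ** -2.0)
    s_curr = s_prev + lambdas[i] ** -2.0
    assert np.isclose(a @ a, lambdas[i] ** 2 * s_curr / s_prev)    # squared norm formula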

In case you have seen this identity before, let me know. I'm really curious if it has been used somewhere else.

Comments.

Gerard wrote on May 3, 2013:

Hey Felix,

If you write Y_0 = 1 and Y_i = 1 + \sum_{j=1}^i x_j, then x_i = Y_i - Y_{i-1} and the LHS becomes

& 1 - \sum_{i=1}^n \frac{Y_i - Y_{i-1}}{Y_i Y_{i-1}} = 1 - \sum_{i=1}^n \biggl( \frac{1}{Y_{i-1}} - \frac{1}{Y_i} \biggr) \\
{}={} & 1 - \frac{1}{Y_0} + \frac{1}{Y_n} = \frac{1}{Y_n}

and you have it. Something slightly more insightful? You tell me :-)

G.

Felix Fontein wrote on May 3, 2013:

Hi Gerard,

yes, that indeed looks much better :-) I wonder why I overlooked that...

Thanks a lot!

Best, Felix