
Yesterday, I managed to compute the determinant of a certain matrix. This matrix appears in the work I'm currently doing, and having an explicit formula for its determinant allowed me to solve a system of linear equations explicitly using Cramer's rule. In this post, I want to present the result together with two different proofs by induction.

I wouldn't be surprised if this is already well-known, but at least I hadn't seen it before. Anyway, if you have seen it before, please feel free to tell me about it.

Edit: in fact, the theorem follows from the more general result stated here. Also see the comments.

Theorem.

Let K be a field and x_1, \dots, x_n \in K. Then the matrix

M(x_1, \dots, x_n) := \Matrix{ 1 + x_1 & 1 & \cdots & 1 \\ 1 & \ddots & \ddots & \vdots \\ \vdots & \ddots & \ddots & 1 \\ 1 & \cdots & 1 & 1 + x_n } \in K^{n \times n}

has determinant

\prod_{i=1}^n x_i + \sum_{j=1}^n \prod_{i \neq j} x_i.
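The formula is easy to sanity-check numerically. Here is a small NumPy sketch (not part of the original post; the helper names M and formula are mine):

```python
import numpy as np

def M(xs):
    """The matrix M(x_1, ..., x_n): the all-ones matrix plus diag(x_1, ..., x_n)."""
    n = len(xs)
    return np.ones((n, n)) + np.diag(xs)

def formula(xs):
    """prod_i x_i + sum_j prod_{i != j} x_i, the claimed determinant."""
    n = len(xs)
    total = np.prod(xs)
    for j in range(n):
        total += np.prod([xs[i] for i in range(n) if i != j])
    return total

xs = [2.0, 3.0, 5.0, 7.0]
print(np.linalg.det(M(xs)))  # ≈ 457.0
print(formula(xs))           # 457.0
```

For these values, the product term is 210 and the sum term is 105 + 70 + 42 + 30 = 247, matching the numerically computed determinant.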

Before I present the proofs, I want to say a few words on how this matrix comes up. Consider the linear system A x = b, where

A = \Matrix{ y_1 & & 0 \\ & \ddots & \\ 0 & & y_n \\ 1 & \cdots & 1 } \in \R^{(n + 1) \times n} \quad \text{and} \quad b = \Matrix{ 0 \\ \vdots \\ 0 \\ 1 } \in \R^{n+1}.

Assuming that all y_i \neq 0, one sees that this system has no solution. Instead, one can try to find a vector x which minimizes \| A x - b \|_2. A well-known fact from linear algebra says that such a minimizer x exists and is unique, and that it is the solution to the normal equations A^T A x = A^T b. Now A^T A = M(y_1^2, \dots, y_n^2) and A^T b = (1, \dots, 1)^T, whence we can describe the unique solution x = (x_1, \dots, x_n)^T using Cramer's rule by

x_i = \frac{\det M(y_1^2, \dots, y_{i-1}^2, 0, y_{i+1}^2, \dots, y_n^2)}{\det M(y_1^2, \dots, y_n^2)}.

Now the above theorem gives an easy to evaluate formula for the determinants, namely

x_i = \frac{\prod_{j \neq i} y_j^2}{\prod_{j=1}^n y_j^2 + \sum_{k=1}^n \prod_{j \neq k} y_j^2}.

The solution vector x can hence be evaluated in O(n) operations, and if the y_i are rational numbers, one can therefore compute an exact (i.e. rational) solution efficiently.
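The O(n) evaluation can be sketched as follows (my own sketch, not from the original post; the products \prod_{j \neq i} y_j^2 are obtained from prefix and suffix products, and I compare against numpy.linalg.lstsq only as a sanity check):

```python
import numpy as np

def solve_closed_form(ys):
    """x_i = prod_{j != i} y_j^2 / (prod_j y_j^2 + sum_k prod_{j != k} y_j^2),
    evaluated in O(n) via prefix/suffix products of the y_j^2."""
    sq = [y * y for y in ys]
    n = len(sq)
    prefix = [1.0] * (n + 1)  # prefix[i] = sq[0] * ... * sq[i-1]
    suffix = [1.0] * (n + 1)  # suffix[i] = sq[i] * ... * sq[n-1]
    for i in range(n):
        prefix[i + 1] = prefix[i] * sq[i]
        suffix[n - 1 - i] = suffix[n - i] * sq[n - 1 - i]
    numer = [prefix[i] * suffix[i + 1] for i in range(n)]  # prod_{j != i} y_j^2
    denom = prefix[n] + sum(numer)
    return [v / denom for v in numer]

ys = [1.0, 2.0, 3.0]
A = np.vstack([np.diag(ys), np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
x_ref, *_ = np.linalg.lstsq(A, b, rcond=None)
print(solve_closed_form(ys))  # [36/85, 9/85, 4/85]
print(x_ref)
```

For y = (1, 2, 3), the numerators are 36, 9, 4 and the denominator is 36 + (36 + 9 + 4) = 85, which agrees with the least-squares solution computed directly.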

Proof (First proof).

For n = 1, 2 this is easy to verify. Hence, assume that n \ge 3 and that the statement is true for n - 1. Using the multilinearity of the determinant and Laplace expansion, both along the first row, we see that

\det M(x_1, \dots, x_n) = x_1 \det M(x_2, \dots, x_n) + \det M(0, x_2, \dots, x_n).

Applying the same argument to the second row of the second determinant, we obtain that the second determinant equals

x_2 \det M(0, x_3, \dots, x_n) + \det M(0, 0, x_3, \dots, x_n).

Since M(0, 0, x_3, \dots, x_n) has two identical rows (namely the first two, both all ones), its determinant is 0. Moreover, by the induction hypothesis, \det M(0, x_3, \dots, x_n) = \prod_{i=3}^n x_i and

\det M(x_2, \dots, x_n) = \prod_{i=2}^n x_i + \sum_{j=2}^n \prod_{i=2 \atop i \neq j}^n x_i.

Plugging this in yields

\det M(x_1, \dots, x_n) = x_1 \prod_{i=2}^n x_i + x_1 \sum_{j=2}^n \prod_{i=2 \atop i \neq j}^n x_i + \prod_{i=2}^n x_i,

which shows the claim.
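The recursion underlying this proof can also be checked numerically; here is a quick NumPy sketch (not part of the original proof, and the helper M is my own):

```python
import numpy as np

def M(xs):
    """The matrix M(x_1, ..., x_n): the all-ones matrix plus diag(x_1, ..., x_n)."""
    return np.ones((len(xs), len(xs))) + np.diag(xs)

# Check det M(x_1,...,x_n) = x_1 det M(x_2,...,x_n) + det M(0, x_2,...,x_n)
# on random inputs.
rng = np.random.default_rng(0)
xs = rng.uniform(1.0, 2.0, size=5)
lhs = np.linalg.det(M(xs))
rhs = xs[0] * np.linalg.det(M(xs[1:])) \
    + np.linalg.det(M(np.concatenate(([0.0], xs[1:]))))
print(abs(lhs - rhs))  # small, up to floating-point error
```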

Proof (Second proof).

For n = 1 this is clear. Hence, assume that n > 1 and that the statement is true for n - 1. We do a Laplace expansion along the last column. This yields a term (-1)^{n + n} (1 + x_n) \det M(x_1, \dots, x_{n-1}), which by the induction hypothesis equals

\prod_{i=1}^{n-1} x_i + \sum_{j=1}^{n-1} \prod_{i=1 \atop i \neq j}^{n-1} x_i + \prod_{i=1}^n x_i + \sum_{j=1}^{n-1} \prod_{i=1 \atop i \neq j}^n x_i.

The other terms are of the form (-1)^{n + i} \det M_i, where M_i is M(x_1, \dots, x_n) with the last column and the i-th row removed. Note that by cyclically permuting rows i to n - 1 of M_i, we obtain the matrix M(x_1, \dots, x_{i-1}, 0, x_{i+1}, \dots, x_{n-1}). The permutation has sign (-1)^{n - i + 1}, whence we see that

(-1)^{n+i} \det M_i = -\det M(x_1, \dots, x_{i-1}, 0, x_{i+1}, \dots, x_{n-1}) = -\prod_{j=1 \atop j \neq i}^{n-1} x_j.

Combining everything so far, we see that

& \det M(x_1, \dots, x_n) \\ {}={} & \prod_{i=1}^{n-1} x_i + \sum_{j=1}^{n-1} \prod_{i=1 \atop i \neq j}^{n-1} x_i + \prod_{i=1}^n x_i + \sum_{j=1}^{n-1} \prod_{i=1 \atop i \neq j}^n x_i - \sum_{i=1}^{n-1} \prod_{j=1 \atop j \neq i}^{n-1} x_j \\ {}={} & \prod_{i=1}^n x_i + \sum_{j=1}^n \prod_{i=1 \atop i \neq j}^n x_i,

which is what we wanted to show.

Comments.

Lin wrote on June 18, 2011:

I remember seeing this determinant before, in my linear algebra course. You may also check with D. Bernstein's . However, the appearance of this determinant you explained seems new to me

Felix Fontein wrote on June 20, 2011:

I just noticed that this determinant fits into a more general scheme: the matrix can be written as the sum of D = diag(x_1, \dots, x_n) and u v^T, where u = v is the vector with all coefficients being 1. According to planetmath.org, we have \det(D + u v^T) = \det D + v^T D^\# u for arbitrary D \in K^{n \times n}, u, v \in K^n, where D^\# is the adjugate of D. For a diagonal D, D^\# is a diagonal matrix with the i-th diagonal entry being \prod_{j \neq i} x_j, whence one obtains the result I've shown above.
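This rank-one-update formula is also easy to verify numerically for the diagonal case. A small NumPy sketch (my own, added for illustration; adj_diag holds the diagonal of the adjugate D^#):

```python
import numpy as np

xs = np.array([2.0, 3.0, 5.0])
D = np.diag(xs)
u = v = np.ones(3)

# For diagonal D, the adjugate D^# is diagonal with entries prod_{j != i} x_j.
adj_diag = np.array([np.prod(np.delete(xs, i)) for i in range(3)])

lhs = np.linalg.det(D + np.outer(u, v))  # det(D + u v^T)
rhs = np.prod(xs) + v @ (adj_diag * u)   # det D + v^T D^# u
print(lhs, rhs)  # both ≈ 61.0
```

Here det D = 30 and v^T D^# u = 15 + 10 + 6 = 31, matching the determinant formula from the theorem with n = 3.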