# On a Certain Determinant.

Yesterday, I managed to compute the determinant of a certain matrix. This matrix appears in the work I'm currently doing, and having an explicit formula for it allowed me to solve a system of linear equations exactly using Cramer's rule. In this post, I want to present the result with two different proofs by induction.

I wouldn't be surprised if this is already well-known, but at least I didn't see it before. Anyway, if you have seen it before, please feel free to tell me about it.

Edit: in fact, the theorem follows from the more general result stated here. Also see the comments.

Theorem.

Let $K$ be a field and $x_1, \dots, x_n \in K$. Then the $n \times n$ matrix

$$A_n = A_n(x_1, \dots, x_n) = \begin{pmatrix} x_1 + 1 & 1 & \cdots & 1 \\ 1 & x_2 + 1 & \ddots & \vdots \\ \vdots & \ddots & \ddots & 1 \\ 1 & \cdots & 1 & x_n + 1 \end{pmatrix}$$

has determinant

$$\det A_n = \prod_{i=1}^n x_i + \sum_{i=1}^n \prod_{\substack{j=1 \\ j \neq i}}^n x_j.$$
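To make the statement concrete, here is a small sanity check in Python (a sketch; the helper names `det` and `closed_form` are mine): it compares a naive Laplace-expansion determinant of this matrix with the claimed closed form, in exact rational arithmetic.

```python
from fractions import Fraction
from math import prod

def det(m):
    """Determinant via Laplace expansion along the first row (fine for small n)."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def closed_form(xs):
    """The claimed value: prod_i x_i + sum_i prod_{j != i} x_j."""
    n = len(xs)
    return prod(xs) + sum(prod(xs[j] for j in range(n) if j != i) for i in range(n))

# The matrix with x_i + 1 on the diagonal and 1 everywhere else.
xs = [Fraction(2), Fraction(-3), Fraction(5, 7), Fraction(1, 2)]
M = [[xs[i] + 1 if i == j else Fraction(1) for j in range(len(xs))]
     for i in range(len(xs))]
assert det(M) == closed_form(xs)
```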

Before I present the proofs, I want to say a few words on how this matrix comes up. Consider the linear system $Ax = b$ over $\mathbb{Q}$ (or $\mathbb{R}$), where

$$A = \begin{pmatrix} a_1 & & \\ & \ddots & \\ & & a_n \\ 1 & \cdots & 1 \end{pmatrix} \in \mathbb{Q}^{(n+1) \times n} \qquad \text{and} \qquad b = \begin{pmatrix} b_1 \\ \vdots \\ b_{n+1} \end{pmatrix} \in \mathbb{Q}^{n+1}.$$

Assuming that all $a_i \neq 0$ and that $\sum_{i=1}^n b_i/a_i \neq b_{n+1}$, one sees that this system has no solution. Instead, one can try to find a vector $x^*$ which minimizes $\lVert A x - b \rVert_2$. A well-known fact in Linear Algebra says that this is the case for a unique $x^*$, and that $x^*$ is the solution to the system $A^T A\, x = A^T b$. Now

$$A^T A = \begin{pmatrix} a_1^2 + 1 & 1 & \cdots & 1 \\ 1 & a_2^2 + 1 & \ddots & \vdots \\ \vdots & \ddots & \ddots & 1 \\ 1 & \cdots & 1 & a_n^2 + 1 \end{pmatrix} \qquad \text{and} \qquad A^T b = \begin{pmatrix} a_1 b_1 + b_{n+1} \\ \vdots \\ a_n b_n + b_{n+1} \end{pmatrix},$$

whence we can describe the unique solution using Cramer's rule by

$$x_i^* = \frac{\det M_i}{\det A^T A}, \qquad i = 1, \dots, n,$$

where $M_i$ is $A^T A$ with the $i$-th column replaced by $A^T b$.

Now the above theorem (applied with $x_i = a_i^2$) gives an easy-to-evaluate formula for the determinants involved; in particular,

$$\det A^T A = \prod_{i=1}^n a_i^2 + \sum_{i=1}^n \prod_{j \neq i} a_j^2.$$

The solution vector $x^*$ can be evaluated in $O(n^2)$ operations, and in case the $a_i$ and $b_i$ are rational numbers, one can hence efficiently compute an exact (i.e. rational) solution.
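As an illustration, here is a Python sketch in exact rational arithmetic (the concrete numbers and helper names are mine): it forms the normal equations for a small overdetermined system whose matrix is a diagonal block stacked on an all-ones row, solves them by Cramer's rule, and verifies the result.

```python
from fractions import Fraction

def det(m):
    """Determinant via Laplace expansion along the first row."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([r[:j] + r[j + 1:] for r in m[1:]])
               for j in range(len(m)))

# A small overdetermined system: A is diag(a_1, ..., a_n) stacked on an
# all-ones row, b has n + 1 entries.  (The concrete numbers are made up.)
a = [Fraction(2), Fraction(3), Fraction(5)]
b = [Fraction(1), Fraction(4), Fraction(1), Fraction(7)]
n = len(a)

# Normal equations: A^T A has a_i^2 + 1 on the diagonal and 1 elsewhere;
# A^T b has entries a_i b_i + b_{n+1}.
AtA = [[a[i] ** 2 + 1 if i == j else Fraction(1) for j in range(n)] for i in range(n)]
Atb = [a[i] * b[i] + b[n] for i in range(n)]

# Cramer's rule: replace the i-th column of A^T A by A^T b.
d = det(AtA)
x = [det([[Atb[r] if c == i else AtA[r][c] for c in range(n)] for r in range(n)]) / d
     for i in range(n)]

# The least-squares solution satisfies the normal equations exactly.
assert all(sum(AtA[i][j] * x[j] for j in range(n)) == Atb[i] for i in range(n))
```

Since everything is a `Fraction`, the computed solution is exact, not a floating-point approximation.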

Proof (First proof).

For $n = 1$ this is easy to verify. Hence, assume that $n > 1$ and that the statement is true for $n - 1$. In the following, $A_{n-1}(y_2, \dots, y_n)$ denotes the analogous $(n-1) \times (n-1)$ matrix with diagonal entries $y_i + 1$ and all off-diagonal entries $1$. Using the multilinearity of the determinant and Lagrange expansion, both for the first row $(x_1 + 1, 1, \dots, 1) = x_1 e_1 + (1, \dots, 1)$, we see that

$$\det A_n = x_1 \det A_{n-1}(x_2, \dots, x_n) + \det B, \qquad \text{where } B = \begin{pmatrix} 1 & 1 & \cdots & 1 \\ 1 & x_2 + 1 & \ddots & \vdots \\ \vdots & \ddots & \ddots & 1 \\ 1 & \cdots & 1 & x_n + 1 \end{pmatrix}.$$

Applying the same argument to the second row $(1, x_2 + 1, 1, \dots, 1) = x_2 e_2 + (1, \dots, 1)$ of the second determinant, we obtain that

$$\det B = x_2 \det A_{n-1}(0, x_3, \dots, x_n) + \det B',$$

where $B'$ is $B$ with its second row replaced by $(1, \dots, 1)$.

Since $B'$ has two identical rows – namely the first two –, its determinant is $0$. Moreover, $\det A_{n-1}(x_2, \dots, x_n) = \prod_{i=2}^n x_i + \sum_{i=2}^n \prod_{\substack{j=2 \\ j \neq i}}^n x_j$ by induction hypothesis, and

$$\det A_{n-1}(0, x_3, \dots, x_n) = \prod_{j=3}^n x_j,$$

again by the induction hypothesis, since every product containing the first diagonal parameter $0$ vanishes.

Plugging this in yields

$$\det A_n = x_1 \Bigl( \prod_{i=2}^n x_i + \sum_{i=2}^n \prod_{\substack{j=2 \\ j \neq i}}^n x_j \Bigr) + x_2 \prod_{j=3}^n x_j = \prod_{i=1}^n x_i + \sum_{i=1}^n \prod_{\substack{j=1 \\ j \neq i}}^n x_j,$$

which shows the claim.
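The two row-splitting steps of this proof can be checked numerically. The following Python sketch (helper names mine) verifies both decomposition steps for a sample choice of $x_i$:

```python
from fractions import Fraction
from math import prod

def det(m):
    """Determinant via Laplace expansion along the first row."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([r[:j] + r[j + 1:] for r in m[1:]])
               for j in range(len(m)))

def A(xs):
    """The matrix of the theorem: x_i + 1 on the diagonal, 1 elsewhere."""
    n = len(xs)
    return [[xs[i] + 1 if i == j else Fraction(1) for j in range(n)] for i in range(n)]

xs = [Fraction(3), Fraction(-2), Fraction(7), Fraction(1, 3)]

# Split the first row (x_1 + 1, 1, ..., 1) = x_1 * e_1 + (1, ..., 1):
B = [[Fraction(1)] * len(xs)] + A(xs)[1:]
assert det(A(xs)) == xs[0] * det(A(xs[1:])) + det(B)

# Same trick on the second row of B: the matrix with two all-ones rows vanishes,
# and the surviving minor is A(0, x_3, ..., x_n) with determinant x_3 * ... * x_n.
assert det(B) == xs[1] * prod(xs[2:])
```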

Proof (Second proof).

For $n = 1$ this is clear. Hence, assume that $n > 1$ and that the statement is true for $n - 1$. Here, $A_{n-1}(y_1, \dots, y_{n-1})$ denotes the analogous $(n-1) \times (n-1)$ matrix with diagonal entries $y_i + 1$ and all off-diagonal entries $1$. We do a Lagrange expansion by the last column. The entry $x_n + 1$ yields a term $(x_n + 1) \det A_{n-1}(x_1, \dots, x_{n-1})$, which by induction hypothesis equals

$$(x_n + 1) \Bigl( \prod_{i=1}^{n-1} x_i + \sum_{i=1}^{n-1} \prod_{\substack{j=1 \\ j \neq i}}^{n-1} x_j \Bigr).$$

The other terms are of the form $(-1)^{i+n} \det M_i$, $i = 1, \dots, n-1$, where $M_i$ is $A_n$ with the last column and the $i$-th row removed. Note that by cyclically permuting rows $i$ to $n-1$ of $M_i$, we obtain the matrix $A_{n-1}(x_1, \dots, x_{i-1}, 0, x_{i+1}, \dots, x_{n-1})$. The permutation has sign $(-1)^{n-i-1}$, whence we see, using the induction hypothesis, that

$$\det M_i = (-1)^{n-i-1} \det A_{n-1}(x_1, \dots, x_{i-1}, 0, x_{i+1}, \dots, x_{n-1}) = (-1)^{n-i-1} \prod_{\substack{j=1 \\ j \neq i}}^{n-1} x_j.$$

Combining everything so far, and noting that $(-1)^{i+n} (-1)^{n-i-1} = -1$, we see that

$$\det A_n = (x_n + 1) \Bigl( \prod_{i=1}^{n-1} x_i + \sum_{i=1}^{n-1} \prod_{\substack{j=1 \\ j \neq i}}^{n-1} x_j \Bigr) - \sum_{i=1}^{n-1} \prod_{\substack{j=1 \\ j \neq i}}^{n-1} x_j = \prod_{i=1}^n x_i + \sum_{i=1}^n \prod_{\substack{j=1 \\ j \neq i}}^n x_j,$$

which is what we wanted to show.
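The sign bookkeeping in this expansion is easy to get wrong, so here is a Python check of the claim about the minors (indices are 0-based in the code, helper names mine):

```python
from fractions import Fraction
from math import prod

def det(m):
    """Determinant via Laplace expansion along the first row."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([r[:j] + r[j + 1:] for r in m[1:]])
               for j in range(len(m)))

xs = [Fraction(2), Fraction(-5), Fraction(3), Fraction(1, 2), Fraction(4)]
n = len(xs)
An = [[xs[i] + 1 if i == j else Fraction(1) for j in range(n)] for i in range(n)]

for i in range(n - 1):  # 0-based i here corresponds to i = 1, ..., n-1 in the text
    # M_i: drop row i and the last column of A_n.
    Mi = [row[:-1] for k, row in enumerate(An) if k != i]
    # Claimed: det M_i = (-1)^(n-i-1) * prod_{j != i, j < n} x_j  (1-based indices).
    expected = (-1) ** (n - i - 2) * prod(xs[j] for j in range(n - 1) if j != i)
    assert det(Mi) == expected
```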

## Comments.

wrote on June 18, 2011:

I remembered I saw this determinant before, in my linear algebra course. You may also check with D. Bernstein's . However, the appearance of this determinant you explained seems new to me

wrote on June 20, 2011:

I just noticed that this determinant fits into a more general scheme: the matrix can be written as the sum of $D = \operatorname{diag}(x_1, \dots, x_n)$ and $vv^T$, where $v$ is the vector with all coefficients being $1$. According to planetmath.org, we have for arbitrary $A \in K^{n \times n}$ and $v \in K^n$ that $\det(A + vv^T) = \det A + v^T \operatorname{adj}(A)\, v$, where $\operatorname{adj}(A)$ is the adjugate of $A$. For a diagonal $D$, $\operatorname{adj}(D)$ is a diagonal matrix with the $i$-th diagonal entry being $\prod_{j \neq i} x_j$, whence one obtains the result I've shown above.
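This adjugate identity is easy to test numerically; the following Python sketch (naive cofactor adjugate, names mine) checks $\det(A + vv^T) = \det A + v^T \operatorname{adj}(A)\, v$ for a concrete $3 \times 3$ matrix and the all-ones vector:

```python
from fractions import Fraction

def det(m):
    """Determinant via Laplace expansion along the first row."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([r[:j] + r[j + 1:] for r in m[1:]])
               for j in range(len(m)))

def adj(m):
    """Adjugate of m: transpose of the cofactor matrix."""
    n = len(m)
    def cof(i, j):
        sub = [r[:j] + r[j + 1:] for k, r in enumerate(m) if k != i]
        return (-1) ** (i + j) * det(sub)
    return [[cof(j, i) for j in range(n)] for i in range(n)]  # note the transpose

A = [[Fraction(v) for v in row] for row in [[2, 5, 1], [0, 3, -1], [4, 1, 2]]]
v = [Fraction(1)] * 3  # the all-ones vector

lhs = det([[A[i][j] + v[i] * v[j] for j in range(3)] for i in range(3)])
rhs = det(A) + sum(v[i] * adj(A)[i][j] * v[j] for i in range(3) for j in range(3))
assert lhs == rhs
```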